From 9401d1f1f16ed979d217e540aa044f430699aa4d Mon Sep 17 00:00:00 2001
From: yoshi-code-bot <70984784+yoshi-code-bot@users.noreply.github.com>
Date: Tue, 4 Jun 2024 00:30:18 -0700
Subject: [PATCH 1/2] chore: Update discovery artifacts (#2411)

## Deleted keys were detected in the following stable discovery artifacts:
aiplatform v1 https://togithub.com/googleapis/google-api-python-client/commit/9d6000fa065ac1ef877de37b94a5e923c89b8228
contactcenterinsights v1 https://togithub.com/googleapis/google-api-python-client/commit/bb49784a9cb793ff64c8e1d4ee3b98a173b4e31d

## Deleted keys were detected in the following pre-stable discovery artifacts:
aiplatform v1beta1 https://togithub.com/googleapis/google-api-python-client/commit/9d6000fa065ac1ef877de37b94a5e923c89b8228
healthcare v1beta1 https://togithub.com/googleapis/google-api-python-client/commit/05c4657fa6322067b421e9e0d887904faba04811

## Discovery Artifact Change Summary:
feat(aiplatform): update the api https://togithub.com/googleapis/google-api-python-client/commit/9d6000fa065ac1ef877de37b94a5e923c89b8228
feat(analyticsadmin): update the api https://togithub.com/googleapis/google-api-python-client/commit/494a29d2266725566e185c41e19c08419c88f9b4
feat(androidmanagement): update the api https://togithub.com/googleapis/google-api-python-client/commit/5afc4010f2f7d303ba0b3a812aab7496aea97adb
feat(backupdr): update the api https://togithub.com/googleapis/google-api-python-client/commit/5bcc5d39d04aa4691e36cc57b256d983ec52159b
feat(chromemanagement): update the api https://togithub.com/googleapis/google-api-python-client/commit/32ddf526ff40d30f20f9116027a4f208f38cc792
feat(cloudbilling): update the api https://togithub.com/googleapis/google-api-python-client/commit/2b5c66b2c5d2ffaa649dd9455da765e10dbce113
feat(cloudfunctions): update the api https://togithub.com/googleapis/google-api-python-client/commit/34314fb79a2ef113f2f1db15738f2d2e29887222
feat(cloudsearch): update the api https://togithub.com/googleapis/google-api-python-client/commit/d32e900aeae99a2d7cab64037a2a0d8285aba8b6
feat(compute): update the api https://togithub.com/googleapis/google-api-python-client/commit/4f7da21c3c67d1019b996492e5dfc9dcacb38214
feat(connectors): update the api https://togithub.com/googleapis/google-api-python-client/commit/8087f14f8942261881ea87bf47fba512a78a9fc1
feat(contactcenteraiplatform): update the api https://togithub.com/googleapis/google-api-python-client/commit/4fb577d2d6e2851c8d923066c9ff7b5c1e9df79e
feat(contactcenterinsights): update the api https://togithub.com/googleapis/google-api-python-client/commit/bb49784a9cb793ff64c8e1d4ee3b98a173b4e31d
feat(datamigration): update the api https://togithub.com/googleapis/google-api-python-client/commit/ac474a90aeb6d2443b12c1bf891c7fb81dbcb9ed
feat(dataplex): update the api https://togithub.com/googleapis/google-api-python-client/commit/d959b3d78c7034bbc3571d9ede7d6de3587989f7
feat(datastream): update the api https://togithub.com/googleapis/google-api-python-client/commit/3abd0f41f2e617749aba78913cb4fa6391df55a8
feat(dialogflow): update the api https://togithub.com/googleapis/google-api-python-client/commit/2d79840e8bfc7aa3bee79b9554627dfd1cb13121
feat(discoveryengine): update the api https://togithub.com/googleapis/google-api-python-client/commit/4522cd5e31c6437d52d8d8a09a54cf2c38fb7dcf
feat(documentai): update the api https://togithub.com/googleapis/google-api-python-client/commit/a06827efcc41fe6af56f687f7c1dc4f8538a166b
feat(fcmdata): update the api https://togithub.com/googleapis/google-api-python-client/commit/f7c50fd9f7b75df93ef9775684cba47b66cb0c81
feat(firebaseappcheck): update the api https://togithub.com/googleapis/google-api-python-client/commit/0744228b03e4c38e64358d9b38c17b2df3e2871e
feat(healthcare): update the api https://togithub.com/googleapis/google-api-python-client/commit/05c4657fa6322067b421e9e0d887904faba04811
feat(iam): update the api https://togithub.com/googleapis/google-api-python-client/commit/331029f3a230aa25f32a75b9e81adf9d6ed97ed5
feat(integrations): update the api https://togithub.com/googleapis/google-api-python-client/commit/8bd4954709fc4bea245abd2efca870e8fdbc2c40
feat(migrationcenter): update the api https://togithub.com/googleapis/google-api-python-client/commit/b46b8b7081691a40f80241bfa154acc6d46abc9d
feat(networkconnectivity): update the api https://togithub.com/googleapis/google-api-python-client/commit/ff49e0b244002d44580f689e0a3f77175bbe5dfb
feat(policyanalyzer): update the api https://togithub.com/googleapis/google-api-python-client/commit/b56b2b1453126a06a9bcba1c96766a905006d3a7
feat(resourcesettings): update the api https://togithub.com/googleapis/google-api-python-client/commit/a5e25b381450da4c88bf86d24550fa7a75f4636a
feat(run): update the api https://togithub.com/googleapis/google-api-python-client/commit/81892c895bfe7d8b5a60a1ce7c62f6bbd603a7b0
fix(secretmanager): update the api https://togithub.com/googleapis/google-api-python-client/commit/d0199eaf1f51289ad13683a54b6b26a5019b560d
feat(servicecontrol): update the api https://togithub.com/googleapis/google-api-python-client/commit/0cfcab3609ec38a84d245cc3207cedc6ec92db5a
feat(spanner): update the api https://togithub.com/googleapis/google-api-python-client/commit/195cae366ac9c01537584735879ef5ae658efee2
feat(versionhistory): update the api https://togithub.com/googleapis/google-api-python-client/commit/9cef71c5a52655e5e37b51ac0a430801c2cd97bd
feat(workflowexecutions): update the api https://togithub.com/googleapis/google-api-python-client/commit/6670b1ea9d65e7574d77954cfd1722736bfa5d1c
---
 ...rojects.locations.batchPredictionJobs.html | 16 +-
 ...cts.locations.deploymentResourcePools.html | 15 +
 ...tform_v1.projects.locations.endpoints.html | 78 +-
 ...ions.featureOnlineStores.featureViews.html | 16 +
 ...rojects.locations.featureOnlineStores.html | 12 +
 ...s.locations.featurestores.entityTypes.html | 40 +
 ..._v1.projects.locations.metadataStores.html | 9 +
 ...platform_v1.projects.locations.models.html | 28 +-
 ...ts.locations.notebookRuntimeTemplates.html | 146 +-
 ...1.projects.locations.notebookRuntimes.html | 42 +-
 ...rojects.locations.persistentResources.html | 44 +
 ....projects.locations.publishers.models.html | 78 +-
 ....projects.locations.trainingPipelines.html | 16 +-
 ...form_v1.projects.locations.tuningJobs.html | 1072 +++++++++++
 docs/dyn/aiplatform_v1.publishers.models.html | 4 +-
 ...rojects.locations.batchPredictionJobs.html | 16 +-
 ...ta1.projects.locations.cachedContents.html | 1259 +++++++++++++
 ...cts.locations.deploymentResourcePools.html | 15 +
 ..._v1beta1.projects.locations.endpoints.html | 69 +-
 ...v1beta1.projects.locations.extensions.html | 32 +-
 ...ions.featureOnlineStores.featureViews.html | 24 +
 ...rojects.locations.featureOnlineStores.html | 12 +
 ...s.locations.featurestores.entityTypes.html | 40 +
 ...aiplatform_v1beta1.projects.locations.html | 5 +
 ...ta1.projects.locations.metadataStores.html | 9 +
 ...eta1.projects.locations.modelMonitors.html | 39 +-
 ...orm_v1beta1.projects.locations.models.html | 28 +-
 ...jects.locations.notebookExecutionJobs.html | 113 +-
 ...ts.locations.notebookRuntimeTemplates.html | 146 +-
 ...1.projects.locations.notebookRuntimes.html | 42 +-
 ....projects.locations.publishers.models.html | 64 +-
 ...1.projects.locations.reasoningEngines.html | 64 +
 ..._v1beta1.projects.locations.schedules.html | 102 -
 ....projects.locations.trainingPipelines.html | 16 +-
 ...v1beta1.projects.locations.tuningJobs.html | 1192 ++++++++++++
 .../aiplatform_v1beta1.publishers.models.html | 8 +-
 ...properties.dataStreams.eventEditRules.html | 116 ++
 ...sadmin_v1alpha.properties.dataStreams.html | 5 +
 ...properties.dataStreams.eventEditRules.html | 116 ++
 ...csadmin_v1beta.properties.dataStreams.html | 5 +
 .../dyn/androidmanagement_v1.enterprises.html | 18 +
 ...oidmanagement_v1.enterprises.policies.html | 4 +
 ....projects.locations.managementServers.html | 6 +
 ...ryauthorization_v1.projects.attestors.html | 12 +-
 ...zation_v1.projects.platforms.policies.html | 12 +-
 docs/dyn/calendar_v3.events.html | 19 +-
 ...gement_v1.customers.telemetry.devices.html | 26 +
 ...nagement_v1.customers.telemetry.users.html | 26 +
 ...ild_v2.projects.locations.connections.html | 8 +-
 ...tions_v1.projects.locations.functions.html | 8 +-
 ...tions_v2.projects.locations.functions.html | 25 +
 ..._v2alpha.projects.locations.functions.html | 25 +
 ...s_v2beta.projects.locations.functions.html | 25 +
 docs/dyn/cloudsearch_v1.query.html | 404 ++++
 docs/dyn/compute_alpha.networks.html | 4 +
 docs/dyn/compute_alpha.regionZones.html | 2 +-
 docs/dyn/compute_alpha.zones.html | 4 +-
 .../compute_beta.instanceGroupManagers.html | 37 +
 docs/dyn/compute_beta.instanceTemplates.html | 32 +-
 docs/dyn/compute_beta.instances.html | 56 +-
 docs/dyn/compute_beta.machineImages.html | 24 +-
 docs/dyn/compute_beta.networks.html | 4 +-
 ...pute_beta.regionInstanceGroupManagers.html | 31 +
 .../compute_beta.regionInstanceTemplates.html | 24 +-
 docs/dyn/compute_beta.regionInstances.html | 8 +-
 docs/dyn/compute_beta.regionZones.html | 2 +-
 docs/dyn/compute_beta.zones.html | 4 +-
 .../dyn/compute_v1.instanceGroupManagers.html | 10 +
 docs/dyn/compute_v1.instanceTemplates.html | 24 +-
 docs/dyn/compute_v1.instances.html | 42 +-
 docs/dyn/compute_v1.machineImages.html | 18 +-
 docs/dyn/compute_v1.networks.html | 4 +-
 ...ompute_v1.regionInstanceGroupManagers.html | 8 +
 .../compute_v1.regionInstanceTemplates.html | 18 +-
 docs/dyn/compute_v1.regionInstances.html | 6 +-
 .../compute_v1.regionTargetHttpsProxies.html | 4 +
 docs/dyn/compute_v1.regionZones.html | 2 +-
 docs/dyn/compute_v1.targetHttpsProxies.html | 5 +
 docs/dyn/compute_v1.zones.html | 4 +-
 ...ojects.locations.providers.connectors.html | 8 +
 ...nections.entityTypes.entitieswithacls.html | 148 ++
 ...cts.locations.connections.entityTypes.html | 5 +
 ...ha1.projects.locations.contactCenters.html | 24 +
 ...s_v1.projects.locations.conversations.html | 672 +------
 ...n_v1.projects.locations.migrationJobs.html | 5 +
 ...aplex_v1.projects.locations.dataScans.html | 32 +-
 ..._v1.projects.locations.dataScans.jobs.html | 14 +-
 docs/dyn/dataplex_v1.projects.locations.html | 2 +-
 ...astream_v1.projects.locations.streams.html | 16 +
 ...flow_v2.projects.conversationProfiles.html | 48 +-
 .../dialogflow_v2.projects.generators.html | 333 ++++
 docs/dyn/dialogflow_v2.projects.html | 5 +
 ...ojects.locations.conversationProfiles.html | 48 +-
 ...flow_v2.projects.locations.generators.html | 577 ++++++
 .../dyn/dialogflow_v2.projects.locations.html | 10 +
 ...rojects.locations.statelessSuggestion.html | 197 ++
 ...low_v2.projects.locations.suggestions.html | 8 +-
 .../dialogflow_v2.projects.suggestions.html | 8 +-
 ...cts.agent.environments.users.sessions.html | 1 +
 ...gflow_v2beta1.projects.agent.sessions.html | 1 +
 ...v2beta1.projects.conversationProfiles.html | 48 +-
 ...1.projects.conversations.participants.html | 1 +
 ...ialogflow_v2beta1.projects.generators.html | 333 ++++
 docs/dyn/dialogflow_v2beta1.projects.html | 5 +
 ...ons.agent.environments.users.sessions.html | 1 +
 ...ta1.projects.locations.agent.sessions.html | 1 +
 ...ojects.locations.conversationProfiles.html | 48 +-
 ....locations.conversations.participants.html | 1 +
 ...v2beta1.projects.locations.generators.html | 577 ++++++
 ...dialogflow_v2beta1.projects.locations.html | 10 +
 ...rojects.locations.statelessSuggestion.html | 197 ++
 ...2beta1.projects.locations.suggestions.html | 8 +-
 ...alogflow_v2beta1.projects.suggestions.html | 8 +-
 docs/dyn/discoveryengine_v1.projects.html | 46 +
 ...tions.collections.dataStores.controls.html | 482 +++++
 ...ects.locations.collections.dataStores.html | 7 +-
 ...collections.dataStores.servingConfigs.html | 14 +
 ...llections.dataStores.sessions.answers.html | 6 +
 ...ons.collections.dataStores.userEvents.html | 11 +-
 ...ocations.collections.engines.controls.html | 482 +++++
 ...rojects.locations.collections.engines.html | 15 +-
 ...ns.collections.engines.servingConfigs.html | 14 +
 ....collections.engines.sessions.answers.html | 6 +
 ...rojects.locations.dataStores.controls.html | 482 +++++
 ...gine_v1.projects.locations.dataStores.html | 7 +-
 ...s.locations.dataStores.servingConfigs.html | 14 +
 ...locations.dataStores.sessions.answers.html | 6 +
 ...jects.locations.dataStores.userEvents.html | 11 +-
 ..._v1.projects.locations.rankingConfigs.html | 3 +
 ...gine_v1.projects.locations.userEvents.html | 9 +-
 ....dataStores.branches.documents.chunks.html | 4 +-
 ...tions.collections.dataStores.controls.html | 482 +++++
 ....collections.dataStores.conversations.html | 2 +-
 ...s.collections.dataStores.customModels.html | 2 +-
 ...ects.locations.collections.dataStores.html | 22 +-
 ...ations.collections.dataStores.schemas.html | 12 +
 ...collections.dataStores.servingConfigs.html | 26 +-
 ...llections.dataStores.sessions.answers.html | 6 +
 ...ons.collections.dataStores.userEvents.html | 11 +-
 ...ocations.collections.engines.controls.html | 482 +++++
 ...ons.collections.engines.conversations.html | 2 +-
 ...rojects.locations.collections.engines.html | 19 +-
 ...ns.collections.engines.servingConfigs.html | 26 +-
 ....collections.engines.sessions.answers.html | 6 +
 ....dataStores.branches.documents.chunks.html | 4 +-
 ...rojects.locations.dataStores.controls.html | 482 +++++
 ...ts.locations.dataStores.conversations.html | 2 +-
 ...v1alpha.projects.locations.dataStores.html | 22 +-
 ...projects.locations.dataStores.schemas.html | 12 +
 ...s.locations.dataStores.servingConfigs.html | 26 +-
 ...locations.dataStores.sessions.answers.html | 6 +
 ...jects.locations.dataStores.userEvents.html | 11 +-
 ...pha.projects.locations.rankingConfigs.html | 3 +
 ...v1alpha.projects.locations.userEvents.html | 9 +-
 docs/dyn/discoveryengine_v1beta.projects.html | 46 +
 ...tions.collections.dataStores.controls.html | 482 +++++
 ...s.collections.dataStores.customModels.html | 2 +-
 ...ects.locations.collections.dataStores.html | 7 +-
 ...collections.dataStores.servingConfigs.html | 14 +
 ...llections.dataStores.sessions.answers.html | 6 +
 ...ons.collections.dataStores.userEvents.html | 11 +-
 ...ocations.collections.engines.controls.html | 482 +++++
 ...rojects.locations.collections.engines.html | 19 +-
 ...ns.collections.engines.servingConfigs.html | 14 +
 ....collections.engines.sessions.answers.html | 6 +
 ...rojects.locations.dataStores.controls.html | 482 +++++
 ..._v1beta.projects.locations.dataStores.html | 7 +-
 ...s.locations.dataStores.servingConfigs.html | 14 +
 ...locations.dataStores.sessions.answers.html | 6 +
 ...jects.locations.dataStores.userEvents.html | 11 +-
 ...eta.projects.locations.rankingConfigs.html | 3 +
 ..._v1beta.projects.locations.userEvents.html | 9 +-
 ...displayvideo_v2.advertisers.creatives.html | 12 +-
 ...displayvideo_v3.advertisers.creatives.html | 12 +-
 ...ntai_v1.projects.locations.processors.html | 200 ++
 ...ocations.processors.humanReviewConfig.html | 90 +
 ...ocations.processors.processorVersions.html | 198 ++
 ...documentai_v1beta2.projects.documents.html | 90 +
 ..._v1beta2.projects.locations.documents.html | 90 +
 ...projects.locations.processors.dataset.html | 2 +-
 ...v1beta3.projects.locations.processors.html | 16 +-
 ...ocations.processors.humanReviewConfig.html | 4 +-
 ...ocations.processors.processorVersions.html | 12 +-
 ...ta1.projects.androidApps.deliveryData.html | 2 +
 docs/dyn/firebaseappcheck_v1.html | 5 +
 .../dyn/firebaseappcheck_v1.oauthClients.html | 215 +++
 ...ck_v1.projects.apps.recaptchaV3Config.html | 4 +-
 ...baseappcheck_v1beta.projects.services.html | 8 +-
 ...ta.projects.services.resourcePolicies.html | 18 +-
 ...rojects.locations.datasets.fhirStores.html | 13 -
 ...am_v1.projects.locations.oauthClients.html | 32 +-
 docs/dyn/iap_v1.v1.html | 6 +-
 ...2.projects.defaultSupportedIdpConfigs.html | 12 +-
 ...ts.tenants.defaultSupportedIdpConfigs.html | 12 +-
 ...cts.locations.integrations.executions.html | 41 +-
 ...ns_v1.projects.locations.integrations.html | 32 +
 ...jects.locations.integrations.versions.html | 256 +++
 ...ions.products.integrations.executions.html | 8 +-
 ...jects.locations.products.integrations.html | 32 +
 ...ations.products.integrations.versions.html | 256 +++
 ...1.projects.locations.assetsExportJobs.html | 404 ++++
 ...ioncenter_v1alpha1.projects.locations.html | 5 +
 ...ha1.projects.locations.preferenceSets.html | 8 +-
 ...jects.locations.reportConfigs.reports.html | 12 +-
 docs/dyn/monitoring_v3.uptimeCheckIps.html | 4 +-
 ...tions.global_.hubs.routeTables.routes.html | 32 +
 ...s.locations.global_.policyBasedRoutes.html | 28 +-
 ...jects.locations.serviceConnectionMaps.html | 12 +
 docs/dyn/policyanalyzer_v1.folders.html | 91 +
 ...rs.locations.activityTypes.activities.html | 141 ++
 ...er_v1.folders.locations.activityTypes.html | 91 +
 .../policyanalyzer_v1.folders.locations.html | 91 +
 docs/dyn/policyanalyzer_v1.html | 10 +
 docs/dyn/policyanalyzer_v1.organizations.html | 91 +
 ...ns.locations.activityTypes.activities.html | 141 ++
 ...organizations.locations.activityTypes.html | 91 +
 ...cyanalyzer_v1.organizations.locations.html | 91 +
 ...ts.locations.activityTypes.activities.html | 2 +-
 docs/dyn/policyanalyzer_v1beta1.folders.html | 91 +
 ...rs.locations.activityTypes.activities.html | 141 ++
 ...beta1.folders.locations.activityTypes.html | 91 +
 ...icyanalyzer_v1beta1.folders.locations.html | 91 +
 docs/dyn/policyanalyzer_v1beta1.html | 10 +
 .../policyanalyzer_v1beta1.organizations.html | 91 +
 ...ns.locations.activityTypes.activities.html | 141 ++
 ...organizations.locations.activityTypes.html | 91 +
 ...lyzer_v1beta1.organizations.locations.html | 91 +
 ...ts.locations.activityTypes.activities.html | 2 +-
 ...chaenterprise_v1.projects.assessments.html | 8 +-
 .../dyn/run_v1.namespaces.configurations.html | 4 +-
 docs/dyn/run_v1.namespaces.executions.html | 4 +-
 docs/dyn/run_v1.namespaces.jobs.html | 4 +-
 docs/dyn/run_v1.namespaces.revisions.html | 4 +-
 docs/dyn/run_v1.namespaces.routes.html | 4 +-
 docs/dyn/run_v1.namespaces.services.html | 4 +-
 ..._v1.projects.locations.configurations.html | 4 +-
 .../run_v1.projects.locations.revisions.html | 4 +-
 .../dyn/run_v1.projects.locations.routes.html | 4 +-
 .../run_v1.projects.locations.services.html | 4 +-
 ...v2.projects.locations.jobs.executions.html | 4 +-
 docs/dyn/run_v2.projects.locations.jobs.html | 8 +-
 .../run_v2.projects.locations.services.html | 4 +-
 ...projects.locations.services.revisions.html | 4 +-
 .../spanner_v1.projects.instanceConfigs.html | 4 +
 ...anner_v1.projects.instances.databases.html | 89 +
 ...projects.instances.databases.sessions.html | 32 +-
 docs/dyn/spanner_v1.projects.instances.html | 16 +-
 ...projects.instances.instancePartitions.html | 4 +-
 ....platforms.channels.versions.releases.html | 1 +
 ...ions.workflows.executions.stepEntries.html | 2 +
 .../documents/abusiveexperiencereport.v1.json | 2 +-
 .../acceleratedmobilepageurl.v1.json | 2 +-
 .../documents/accessapproval.v1.json | 2 +-
 .../documents/accesscontextmanager.v1.json | 2 +-
 .../discovery_cache/documents/acmedns.v1.json | 2 +-
 .../documents/addressvalidation.v1.json | 2 +-
 .../documents/adexchangebuyer2.v2beta1.json | 2 +-
 .../documents/adexperiencereport.v1.json | 2 +-
 .../documents/admin.datatransfer_v1.json | 2 +-
 .../documents/admin.directory_v1.json | 2 +-
 .../documents/admin.reports_v1.json | 2 +-
 .../discovery_cache/documents/admob.v1.json | 2 +-
 .../documents/admob.v1beta.json | 2 +-
 .../discovery_cache/documents/adsense.v2.json | 2 +-
 .../documents/advisorynotifications.v1.json | 2 +-
 .../documents/aiplatform.v1.json | 784 ++++++--
 .../documents/aiplatform.v1beta1.json | 1299 ++++++++++---
 .../documents/alertcenter.v1beta1.json | 2 +-
 .../documents/analyticsadmin.v1alpha.json | 45 +-
 .../documents/analyticsadmin.v1beta.json | 45 +-
 .../documents/analyticsdata.v1beta.json | 2 +-
 .../documents/analyticshub.v1.json | 2 +-
 .../documents/analyticshub.v1beta1.json | 2 +-
 .../androiddeviceprovisioning.v1.json | 2 +-
 .../documents/androidenterprise.v1.json | 2 +-
 .../documents/androidmanagement.v1.json | 54 +-
 .../documents/androidpublisher.v3.json | 2 +-
 .../documents/appengine.v1.json | 2 +-
 .../documents/appengine.v1alpha.json | 2 +-
 .../documents/appengine.v1beta.json | 2 +-
 .../documents/area120tables.v1alpha1.json | 2 +-
 .../authorizedbuyersmarketplace.v1.json | 2 +-
 .../documents/backupdr.v1.json | 12 +-
 .../discovery_cache/documents/batch.v1.json | 2 +-
 .../discovery_cache/documents/biglake.v1.json | 2 +-
 .../documents/bigquerydatapolicy.v1.json | 2 +-
 .../documents/bigtableadmin.v2.json | 2 +-
 .../documents/billingbudgets.v1.json | 2 +-
 .../documents/billingbudgets.v1beta1.json | 2 +-
 .../documents/binaryauthorization.v1.json | 6 +-
 .../binaryauthorization.v1beta1.json | 2 +-
 .../documents/blockchainnodeengine.v1.json | 2 +-
 .../discovery_cache/documents/blogger.v2.json | 2 +-
 .../discovery_cache/documents/blogger.v3.json | 2 +-
 .../discovery_cache/documents/books.v1.json | 2 +-
 .../businessprofileperformance.v1.json | 2 +-
 .../documents/calendar.v3.json | 10 +-
 .../documents/checks.v1alpha.json | 2 +-
 .../documents/chromemanagement.v1.json | 97 +-
 .../documents/chromepolicy.v1.json | 2 +-
 .../documents/chromeuxreport.v1.json | 2 +-
 .../documents/civicinfo.v2.json | 2 +-
 .../documents/classroom.v1.json | 2 +-
 .../documents/cloudasset.v1.json | 2 +-
 .../documents/cloudasset.v1beta1.json | 2 +-
 .../documents/cloudasset.v1p1beta1.json | 2 +-
 .../documents/cloudasset.v1p5beta1.json | 2 +-
 .../documents/cloudasset.v1p7beta1.json | 2 +-
 .../documents/cloudbilling.v1.json | 2 +-
 .../documents/cloudbilling.v1beta.json | 5 +-
 .../documents/cloudbuild.v1.json | 2 +-
 .../documents/cloudbuild.v2.json | 4 +-
 .../documents/cloudchannel.v1.json | 2 +-
 .../documents/clouddeploy.v1.json | 2 +-
 .../clouderrorreporting.v1beta1.json | 2 +-
 .../documents/cloudfunctions.v1.json | 4 +-
 .../documents/cloudfunctions.v2.json | 10 +-
 .../documents/cloudfunctions.v2alpha.json | 10 +-
 .../documents/cloudfunctions.v2beta.json | 10 +-
 .../documents/cloudidentity.v1.json | 2 +-
 .../documents/cloudidentity.v1beta1.json | 2 +-
 .../documents/cloudkms.v1.json | 2 +-
 .../documents/cloudresourcemanager.v1.json | 2 +-
 .../cloudresourcemanager.v1beta1.json | 2 +-
 .../documents/cloudresourcemanager.v2.json | 2 +-
 .../cloudresourcemanager.v2beta1.json | 2 +-
 .../documents/cloudresourcemanager.v3.json | 2 +-
 .../documents/cloudscheduler.v1.json | 2 +-
 .../documents/cloudscheduler.v1beta1.json | 2 +-
 .../documents/cloudsearch.v1.json | 42 +-
 .../documents/cloudshell.v1.json | 2 +-
 .../documents/cloudsupport.v2.json | 2 +-
 .../documents/cloudsupport.v2beta.json | 2 +-
 .../documents/compute.alpha.json | 8 +-
 .../documents/compute.beta.json | 54 +-
 .../discovery_cache/documents/compute.v1.json | 34 +-
 .../documents/connectors.v1.json | 15 +-
 .../documents/connectors.v2.json | 138 +-
 .../contactcenteraiplatform.v1alpha1.json | 9 +-
 .../documents/contactcenterinsights.v1.json | 810 +-------
 .../documents/container.v1.json | 2 +-
 .../documents/container.v1beta1.json | 2 +-
 .../documents/containeranalysis.v1.json | 4 +-
 .../documents/containeranalysis.v1alpha1.json | 4 +-
 .../documents/containeranalysis.v1beta1.json | 4 +-
 .../documents/content.v2.1.json | 2 +-
 .../documents/customsearch.v1.json | 2 +-
 .../documents/datamigration.v1.json | 12 +-
 .../documents/datamigration.v1beta1.json | 2 +-
 .../documents/datapipelines.v1.json | 2 +-
 .../documents/dataplex.v1.json | 35 +-
 .../documents/dataportability.v1.json | 2 +-
 .../documents/dataportability.v1beta.json | 2 +-
 .../documents/dataproc.v1.json | 2 +-
 .../documents/datastream.v1.json | 22 +-
 .../documents/developerconnect.v1.json | 2 +-
 .../documents/dialogflow.v2.json | 623 ++++++-
 .../documents/dialogflow.v2beta1.json | 628 ++++++-
 .../documents/dialogflow.v3.json | 2 +-
 .../documents/dialogflow.v3beta1.json | 2 +-
 .../documents/digitalassetlinks.v1.json | 2 +-
 .../documents/discoveryengine.v1.json | 1641 +++++++++++++++--
 .../documents/discoveryengine.v1alpha.json | 1389 +++++++++++++-
 .../documents/discoveryengine.v1beta.json | 1623 ++++++++++++++--
 .../documents/displayvideo.v2.json | 25 +-
 .../documents/displayvideo.v3.json | 25 +-
 .../discovery_cache/documents/dlp.v2.json | 4 +-
 .../discovery_cache/documents/dns.v1.json | 2 +-
 .../documents/dns.v1beta2.json | 2 +-
 .../discovery_cache/documents/docs.v1.json | 2 +-
 .../documents/documentai.v1.json | 1005 +++++++++-
 .../documents/documentai.v1beta2.json | 580 +++++-
 .../documents/documentai.v1beta3.json | 602 +++++-
 .../documents/domainsrdap.v1.json | 2 +-
 .../documents/doubleclickbidmanager.v2.json | 2 +-
 .../documents/doubleclicksearch.v2.json | 2 +-
 .../discovery_cache/documents/drive.v2.json | 2 +-
 .../discovery_cache/documents/drive.v3.json | 2 +-
 .../documents/driveactivity.v2.json | 2 +-
 .../documents/drivelabels.v2.json | 2 +-
 .../documents/drivelabels.v2beta.json | 2 +-
 .../documents/essentialcontacts.v1.json | 2 +-
 .../documents/eventarc.v1.json | 2 +-
 .../documents/factchecktools.v1alpha1.json | 2 +-
 .../discovery_cache/documents/fcm.v1.json | 2 +-
 .../documents/fcmdata.v1beta1.json | 12 +-
 .../discovery_cache/documents/file.v1.json | 2 +-
 .../documents/file.v1beta1.json | 2 +-
 .../documents/firebase.v1beta1.json | 2 +-
 .../documents/firebaseappcheck.v1.json | 126 +-
 .../documents/firebaseappcheck.v1beta.json | 14 +-
 .../documents/firebaseappdistribution.v1.json | 2 +-
 .../firebaseappdistribution.v1alpha.json | 2 +-
 .../documents/firebasedatabase.v1beta.json | 2 +-
 .../documents/firebasedynamiclinks.v1.json | 2 +-
 .../documents/firebasehosting.v1.json | 2 +-
 .../documents/firebasehosting.v1beta1.json | 2 +-
 .../documents/firebaseml.v1.json | 2 +-
 .../documents/firebaseml.v1beta2.json | 2 +-
 .../documents/firebaseml.v2beta.json | 2 +-
 .../documents/firebaserules.v1.json | 2 +-
 .../documents/firebasestorage.v1beta.json | 2 +-
 .../discovery_cache/documents/fitness.v1.json | 2 +-
 .../discovery_cache/documents/forms.v1.json | 2 +-
 .../discovery_cache/documents/gmail.v1.json | 2 +-
 .../documents/gmailpostmastertools.v1.json | 2 +-
 .../gmailpostmastertools.v1beta1.json | 2 +-
 .../documents/groupsmigration.v1.json | 2 +-
 .../documents/healthcare.v1.json | 2 +-
 .../documents/healthcare.v1beta1.json | 6 +-
 .../documents/homegraph.v1.json | 2 +-
 .../discovery_cache/documents/iam.v1.json | 6 +-
 .../discovery_cache/documents/iam.v2.json | 178 +-
 .../discovery_cache/documents/iam.v2beta.json | 178 +-
 .../documents/iamcredentials.v1.json | 2 +-
 .../discovery_cache/documents/iap.v1.json | 4 +-
 .../documents/iap.v1beta1.json | 2 +-
 .../documents/identitytoolkit.v1.json | 2 +-
 .../documents/identitytoolkit.v2.json | 5 +-
 .../documents/indexing.v3.json | 2 +-
 .../documents/integrations.v1.json | 117 +-
 .../discovery_cache/documents/keep.v1.json | 2 +-
 .../documents/kgsearch.v1.json | 2 +-
 .../documents/language.v1.json | 2 +-
 .../documents/language.v1beta2.json | 2 +-
 .../documents/language.v2.json | 2 +-
 .../documents/libraryagent.v1.json | 2 +-
 .../documents/licensing.v1.json | 2 +-
 .../documents/lifesciences.v2beta.json | 2 +-
 .../documents/localservices.v1.json | 2 +-
 .../discovery_cache/documents/logging.v2.json | 2 +-
 .../marketingplatformadmin.v1alpha.json | 4 +-
 .../documents/migrationcenter.v1.json | 2 +-
 .../documents/migrationcenter.v1alpha1.json | 367 +++-
 .../documents/monitoring.v1.json | 2 +-
 .../documents/monitoring.v3.json | 2 +-
 .../mybusinessaccountmanagement.v1.json | 2 +-
 .../mybusinessbusinessinformation.v1.json | 2 +-
 .../documents/mybusinesslodging.v1.json | 2 +-
 .../documents/mybusinessnotifications.v1.json | 2 +-
 .../documents/mybusinessplaceactions.v1.json | 2 +-
 .../documents/mybusinessqanda.v1.json | 2 +-
 .../documents/mybusinessverifications.v1.json | 2 +-
 .../documents/networkconnectivity.v1.json | 109 +-
 .../networkconnectivity.v1alpha1.json | 2 +-
 .../documents/networkmanagement.v1.json | 2 +-
 .../documents/networkmanagement.v1beta1.json | 2 +-
 .../documents/ondemandscanning.v1.json | 2 +-
 .../documents/ondemandscanning.v1beta1.json | 2 +-
 .../documents/orgpolicy.v2.json | 2 +-
 .../documents/osconfig.v1.json | 2 +-
 .../documents/osconfig.v1alpha.json | 2 +-
 .../documents/osconfig.v1beta.json | 2 +-
 .../documents/pagespeedonline.v5.json | 2 +-
 .../paymentsresellersubscription.v1.json | 2 +-
 .../discovery_cache/documents/people.v1.json | 2 +-
 .../discovery_cache/documents/places.v1.json | 2 +-
 .../documents/playcustomapp.v1.json | 2 +-
 .../playdeveloperreporting.v1alpha1.json | 2 +-
 .../playdeveloperreporting.v1beta1.json | 2 +-
 .../documents/playgrouping.v1alpha1.json | 2 +-
 .../documents/playintegrity.v1.json | 2 +-
 .../documents/policyanalyzer.v1.json | 117 +-
 .../documents/policyanalyzer.v1beta1.json | 117 +-
 .../documents/policysimulator.v1.json | 2 +-
 .../documents/policysimulator.v1alpha.json | 2 +-
 .../documents/policysimulator.v1beta.json | 2 +-
 .../documents/policytroubleshooter.v1.json | 2 +-
 .../policytroubleshooter.v1beta.json | 2 +-
 .../documents/prod_tt_sasportal.v1alpha1.json | 2 +-
 .../documents/publicca.v1.json | 2 +-
 .../documents/publicca.v1alpha1.json | 2 +-
 .../documents/publicca.v1beta1.json | 2 +-
 .../discovery_cache/documents/pubsub.v1.json | 2 +-
 .../documents/pubsub.v1beta1a.json | 2 +-
 .../documents/pubsub.v1beta2.json | 2 +-
 .../documents/pubsublite.v1.json | 2 +-
 .../readerrevenuesubscriptionlinking.v1.json | 2 +-
 .../documents/realtimebidding.v1.json | 2 +-
 .../documents/recaptchaenterprise.v1.json | 8 +-
 .../recommendationengine.v1beta1.json | 2 +-
 .../documents/reseller.v1.json | 2 +-
 .../documents/resourcesettings.v1.json | 14 +-
 .../discovery_cache/documents/run.v1.json | 54 +-
 .../discovery_cache/documents/run.v2.json | 46 +-
 .../discovery_cache/documents/script.v1.json | 2 +-
 .../documents/searchconsole.v1.json | 2 +-
 .../documents/secretmanager.v1.json | 57 +-
 .../documents/secretmanager.v1beta1.json | 57 +-
 .../documents/secretmanager.v1beta2.json | 57 +-
 .../documents/securitycenter.v1.json | 2 +-
 .../documents/securitycenter.v1beta1.json | 2 +-
 .../documents/securitycenter.v1beta2.json | 2 +-
 .../documents/servicecontrol.v1.json | 4 +-
 .../documents/servicecontrol.v2.json | 4 +-
 .../documents/servicedirectory.v1.json | 2 +-
 .../documents/servicedirectory.v1beta1.json | 2 +-
 .../documents/servicemanagement.v1.json | 2 +-
 .../documents/servicenetworking.v1.json | 4 +-
 .../documents/servicenetworking.v1beta.json | 4 +-
 .../discovery_cache/documents/sheets.v4.json | 2 +-
 .../discovery_cache/documents/slides.v1.json | 2 +-
 .../discovery_cache/documents/solar.v1.json | 2 +-
 .../discovery_cache/documents/spanner.v1.json | 173 +-
 .../discovery_cache/documents/speech.v1.json | 2 +-
 .../documents/speech.v1p1beta1.json | 2 +-
 .../documents/sqladmin.v1.json | 12 +-
 .../documents/sqladmin.v1beta4.json | 12 +-
 .../discovery_cache/documents/storage.v1.json | 4 +-
 .../documents/storagetransfer.v1.json | 2 +-
 .../documents/streetviewpublish.v1.json | 2 +-
 .../discovery_cache/documents/sts.v1.json | 2 +-
 .../discovery_cache/documents/sts.v1beta.json | 2 +-
 .../documents/tagmanager.v1.json | 2 +-
 .../documents/tagmanager.v2.json | 2 +-
 .../discovery_cache/documents/tasks.v1.json | 2 +-
 .../discovery_cache/documents/testing.v1.json | 2 +-
 .../documents/toolresults.v1beta3.json | 2 +-
 .../discovery_cache/documents/tpu.v1.json | 2 +-
 .../documents/tpu.v1alpha1.json | 2 +-
 .../discovery_cache/documents/tpu.v2.json | 2 +-
 .../documents/tpu.v2alpha1.json | 2 +-
 .../documents/travelimpactmodel.v1.json | 2 +-
 .../discovery_cache/documents/vault.v1.json | 2 +-
 .../documents/verifiedaccess.v1.json | 2 +-
 .../documents/verifiedaccess.v2.json | 2 +-
 .../documents/versionhistory.v1.json | 6 +-
 .../discovery_cache/documents/vision.v1.json | 2 +-
 .../documents/vision.v1p1beta1.json | 2 +-
 .../documents/vision.v1p2beta1.json | 2 +-
 .../documents/vmmigration.v1.json | 2 +-
 .../documents/vmwareengine.v1.json | 2 +-
 .../documents/walletobjects.v1.json | 2 +-
 .../documents/webfonts.v1.json | 2 +-
 .../discovery_cache/documents/webrisk.v1.json | 2 +-
 .../documents/websecurityscanner.v1.json | 2 +-
 .../documents/websecurityscanner.v1alpha.json | 2 +-
 .../documents/websecurityscanner.v1beta.json | 2 +-
 .../documents/workflowexecutions.v1.json | 7 +-
 .../documents/workflowexecutions.v1beta.json | 2 +-
 .../documents/workflows.v1.json | 2 +-
 .../documents/workflows.v1beta.json | 2 +-
 .../documents/workspaceevents.v1.json | 2 +-
 .../discovery_cache/documents/youtube.v3.json | 2 +-
 .../documents/youtubeAnalytics.v2.json | 2 +-
 .../documents/youtubereporting.v1.json | 2 +-
 546 files changed, 29188 insertions(+), 3416 deletions(-)
 create mode 100644 docs/dyn/aiplatform_v1beta1.projects.locations.cachedContents.html
 create mode 100644 docs/dyn/analyticsadmin_v1alpha.properties.dataStreams.eventEditRules.html
 create mode 100644 docs/dyn/analyticsadmin_v1beta.properties.dataStreams.eventEditRules.html
 create mode 100644 docs/dyn/connectors_v2.projects.locations.connections.entityTypes.entitieswithacls.html
 create mode 100644 docs/dyn/dialogflow_v2.projects.generators.html
 create mode 100644 docs/dyn/dialogflow_v2.projects.locations.generators.html
 create mode 100644 docs/dyn/dialogflow_v2.projects.locations.statelessSuggestion.html
 create mode 100644 docs/dyn/dialogflow_v2beta1.projects.generators.html
 create mode 100644 docs/dyn/dialogflow_v2beta1.projects.locations.generators.html
 create mode 100644 docs/dyn/dialogflow_v2beta1.projects.locations.statelessSuggestion.html
 create mode 100644 docs/dyn/discoveryengine_v1.projects.locations.collections.dataStores.controls.html
 create mode 100644 docs/dyn/discoveryengine_v1.projects.locations.collections.engines.controls.html
 create mode 100644 docs/dyn/discoveryengine_v1.projects.locations.dataStores.controls.html
 create mode 100644 docs/dyn/discoveryengine_v1alpha.projects.locations.collections.dataStores.controls.html
 create mode 100644 docs/dyn/discoveryengine_v1alpha.projects.locations.collections.engines.controls.html
 create mode 100644 docs/dyn/discoveryengine_v1alpha.projects.locations.dataStores.controls.html
 create mode 100644 docs/dyn/discoveryengine_v1beta.projects.locations.collections.dataStores.controls.html
 create mode 100644 docs/dyn/discoveryengine_v1beta.projects.locations.collections.engines.controls.html
 create mode 100644 docs/dyn/discoveryengine_v1beta.projects.locations.dataStores.controls.html
 create mode 100644 docs/dyn/firebaseappcheck_v1.oauthClients.html
 create mode 100644 docs/dyn/migrationcenter_v1alpha1.projects.locations.assetsExportJobs.html
 create mode 100644 docs/dyn/policyanalyzer_v1.folders.html
 create mode 100644 docs/dyn/policyanalyzer_v1.folders.locations.activityTypes.activities.html
 create mode 100644 docs/dyn/policyanalyzer_v1.folders.locations.activityTypes.html
 create mode 100644 docs/dyn/policyanalyzer_v1.folders.locations.html
 create mode 100644 docs/dyn/policyanalyzer_v1.organizations.html
 create mode 100644 docs/dyn/policyanalyzer_v1.organizations.locations.activityTypes.activities.html
 create mode 100644 docs/dyn/policyanalyzer_v1.organizations.locations.activityTypes.html
 create mode 100644 docs/dyn/policyanalyzer_v1.organizations.locations.html
 create mode 100644 docs/dyn/policyanalyzer_v1beta1.folders.html
 create mode 100644 docs/dyn/policyanalyzer_v1beta1.folders.locations.activityTypes.activities.html
 create mode 100644 docs/dyn/policyanalyzer_v1beta1.folders.locations.activityTypes.html
 create mode 100644 docs/dyn/policyanalyzer_v1beta1.folders.locations.html
 create mode 100644 docs/dyn/policyanalyzer_v1beta1.organizations.html
 create mode 100644 docs/dyn/policyanalyzer_v1beta1.organizations.locations.activityTypes.activities.html
 create mode 100644 docs/dyn/policyanalyzer_v1beta1.organizations.locations.activityTypes.html
 create mode 100644 docs/dyn/policyanalyzer_v1beta1.organizations.locations.html

diff --git a/docs/dyn/aiplatform_v1.projects.locations.batchPredictionJobs.html b/docs/dyn/aiplatform_v1.projects.locations.batchPredictionJobs.html
index 312da3ee01a..41141273d66 100644
--- a/docs/dyn/aiplatform_v1.projects.locations.batchPredictionJobs.html
+++ b/docs/dyn/aiplatform_v1.projects.locations.batchPredictionJobs.html
@@ -360,7 +360,7 @@

Method Details

}, ], "healthProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes readiness probe. - "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take. + "exec": { # ExecAction specifies a command to execute. # ExecAction probes the health of a container by executing a command. "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. "A String", ], @@ -378,7 +378,7 @@

Method Details

"predictRoute": "A String", # Immutable. HTTP path on the container to send prediction requests to. Vertex AI forwards requests sent using projects.locations.endpoints.predict to this path on the container's IP address and port. Vertex AI then returns the container's response in the API response. For example, if you set this field to `/foo`, then when Vertex AI receives a prediction request, it forwards the request body in a POST request to the `/foo` path on the port of your container specified by the first value of this `ModelContainerSpec`'s ports field. If you don't specify this field, it defaults to the following value when you deploy this Model to an Endpoint: /v1/endpoints/ENDPOINT/deployedModels/DEPLOYED_MODEL:predict The placeholders in this value are replaced as follows: * ENDPOINT: The last segment (following `endpoints/`)of the Endpoint.name][] field of the Endpoint where this Model has been deployed. (Vertex AI makes this value available to your container code as the [`AIP_ENDPOINT_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) * DEPLOYED_MODEL: DeployedModel.id of the `DeployedModel`. (Vertex AI makes this value available to your container code as the [`AIP_DEPLOYED_MODEL_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) "sharedMemorySizeMb": "A String", # Immutable. The amount of the VM memory to reserve as the shared memory for the model in megabytes. "startupProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes startup probe. - "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take. + "exec": { # ExecAction specifies a command to execute. # ExecAction probes the health of a container by executing a command. "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. "A String", ], @@ -630,7 +630,7 @@

Method Details

}, ], "healthProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes readiness probe. - "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take. + "exec": { # ExecAction specifies a command to execute. # ExecAction probes the health of a container by executing a command. "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. "A String", ], @@ -648,7 +648,7 @@

Method Details

"predictRoute": "A String", # Immutable. HTTP path on the container to send prediction requests to. Vertex AI forwards requests sent using projects.locations.endpoints.predict to this path on the container's IP address and port. Vertex AI then returns the container's response in the API response. For example, if you set this field to `/foo`, then when Vertex AI receives a prediction request, it forwards the request body in a POST request to the `/foo` path on the port of your container specified by the first value of this `ModelContainerSpec`'s ports field. If you don't specify this field, it defaults to the following value when you deploy this Model to an Endpoint: /v1/endpoints/ENDPOINT/deployedModels/DEPLOYED_MODEL:predict The placeholders in this value are replaced as follows: * ENDPOINT: The last segment (following `endpoints/`)of the Endpoint.name][] field of the Endpoint where this Model has been deployed. (Vertex AI makes this value available to your container code as the [`AIP_ENDPOINT_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) * DEPLOYED_MODEL: DeployedModel.id of the `DeployedModel`. (Vertex AI makes this value available to your container code as the [`AIP_DEPLOYED_MODEL_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) "sharedMemorySizeMb": "A String", # Immutable. The amount of the VM memory to reserve as the shared memory for the model in megabytes. "startupProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes startup probe. - "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take. + "exec": { # ExecAction specifies a command to execute. # ExecAction probes the health of a container by executing a command. "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. "A String", ], @@ -942,7 +942,7 @@

Method Details

}, ], "healthProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes readiness probe. - "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take. + "exec": { # ExecAction specifies a command to execute. # ExecAction probes the health of a container by executing a command. "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. "A String", ], @@ -960,7 +960,7 @@

Method Details

"predictRoute": "A String", # Immutable. HTTP path on the container to send prediction requests to. Vertex AI forwards requests sent using projects.locations.endpoints.predict to this path on the container's IP address and port. Vertex AI then returns the container's response in the API response. For example, if you set this field to `/foo`, then when Vertex AI receives a prediction request, it forwards the request body in a POST request to the `/foo` path on the port of your container specified by the first value of this `ModelContainerSpec`'s ports field. If you don't specify this field, it defaults to the following value when you deploy this Model to an Endpoint: /v1/endpoints/ENDPOINT/deployedModels/DEPLOYED_MODEL:predict The placeholders in this value are replaced as follows: * ENDPOINT: The last segment (following `endpoints/`)of the Endpoint.name][] field of the Endpoint where this Model has been deployed. (Vertex AI makes this value available to your container code as the [`AIP_ENDPOINT_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) * DEPLOYED_MODEL: DeployedModel.id of the `DeployedModel`. (Vertex AI makes this value available to your container code as the [`AIP_DEPLOYED_MODEL_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) "sharedMemorySizeMb": "A String", # Immutable. The amount of the VM memory to reserve as the shared memory for the model in megabytes. "startupProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes startup probe. - "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take. + "exec": { # ExecAction specifies a command to execute. # ExecAction probes the health of a container by executing a command. "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. "A String", ], @@ -1225,7 +1225,7 @@

Method Details

}, ], "healthProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes readiness probe. - "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take. + "exec": { # ExecAction specifies a command to execute. # ExecAction probes the health of a container by executing a command. "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. "A String", ], @@ -1243,7 +1243,7 @@

Method Details

"predictRoute": "A String", # Immutable. HTTP path on the container to send prediction requests to. Vertex AI forwards requests sent using projects.locations.endpoints.predict to this path on the container's IP address and port. Vertex AI then returns the container's response in the API response. For example, if you set this field to `/foo`, then when Vertex AI receives a prediction request, it forwards the request body in a POST request to the `/foo` path on the port of your container specified by the first value of this `ModelContainerSpec`'s ports field. If you don't specify this field, it defaults to the following value when you deploy this Model to an Endpoint: /v1/endpoints/ENDPOINT/deployedModels/DEPLOYED_MODEL:predict The placeholders in this value are replaced as follows: * ENDPOINT: The last segment (following `endpoints/`)of the Endpoint.name][] field of the Endpoint where this Model has been deployed. (Vertex AI makes this value available to your container code as the [`AIP_ENDPOINT_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) * DEPLOYED_MODEL: DeployedModel.id of the `DeployedModel`. (Vertex AI makes this value available to your container code as the [`AIP_DEPLOYED_MODEL_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) "sharedMemorySizeMb": "A String", # Immutable. The amount of the VM memory to reserve as the shared memory for the model in megabytes. "startupProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes startup probe. - "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take. + "exec": { # ExecAction specifies a command to execute. # ExecAction probes the health of a container by executing a command. "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. "A String", ], diff --git a/docs/dyn/aiplatform_v1.projects.locations.deploymentResourcePools.html b/docs/dyn/aiplatform_v1.projects.locations.deploymentResourcePools.html index 82c6f2f2ead..f8b91e54f00 100644 --- a/docs/dyn/aiplatform_v1.projects.locations.deploymentResourcePools.html +++ b/docs/dyn/aiplatform_v1.projects.locations.deploymentResourcePools.html @@ -137,7 +137,12 @@

Method Details

"maxReplicaCount": 42, # Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, will use min_replica_count as the default value. The value of this field impacts the charge against Vertex CPU and GPU quotas. Specifically, you will be charged for (max_replica_count * number of cores in the selected machine type) and (max_replica_count * number of GPUs per replica in the selected machine type). "minReplicaCount": 42, # Required. Immutable. The minimum number of machine replicas this DeployedModel will be always deployed on. This value must be greater than or equal to 1. If traffic against the DeployedModel increases, it may dynamically be deployed onto more replicas, and as traffic decreases, some of these extra replicas may be freed. }, + "disableContainerLogging": True or False, # If the DeploymentResourcePool is deployed with custom-trained Models or AutoML Tabular Models, the container(s) of the DeploymentResourcePool will send `stderr` and `stdout` streams to Cloud Logging by default. Please note that the logs incur cost, which are subject to [Cloud Logging pricing](https://cloud.google.com/logging/pricing). User can disable container logging by setting this flag to true. + "encryptionSpec": { # Represents a customer-managed encryption key spec that can be applied to a top-level resource. # Customer-managed encryption key spec for a DeploymentResourcePool. If set, this DeploymentResourcePool will be secured by this key. Endpoints and the DeploymentResourcePool they deploy in need to have the same EncryptionSpec. + "kmsKeyName": "A String", # Required. The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: `projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key`. The key needs to be in the same region as where the compute resource is created. + }, "name": "A String", # Immutable. The resource name of the DeploymentResourcePool. Format: `projects/{project}/locations/{location}/deploymentResourcePools/{deployment_resource_pool}` + "serviceAccount": "A String", # The service account that the DeploymentResourcePool's container(s) run as. Specify the email address of the service account. If this service account is not specified, the container(s) run as a service account that doesn't have access to the resource project. Users deploying the Models to this DeploymentResourcePool must have the `iam.serviceAccounts.actAs` permission on this service account. }, "deploymentResourcePoolId": "A String", # Required. The ID to use for the DeploymentResourcePool, which will become the final component of the DeploymentResourcePool's resource name. The maximum length is 63 characters, and valid characters are `/^[a-z]([a-z0-9-]{0,61}[a-z0-9])?$/`. } @@ -238,7 +243,12 @@

Method Details

"maxReplicaCount": 42, # Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, will use min_replica_count as the default value. The value of this field impacts the charge against Vertex CPU and GPU quotas. Specifically, you will be charged for (max_replica_count * number of cores in the selected machine type) and (max_replica_count * number of GPUs per replica in the selected machine type). "minReplicaCount": 42, # Required. Immutable. The minimum number of machine replicas this DeployedModel will be always deployed on. This value must be greater than or equal to 1. If traffic against the DeployedModel increases, it may dynamically be deployed onto more replicas, and as traffic decreases, some of these extra replicas may be freed. }, + "disableContainerLogging": True or False, # If the DeploymentResourcePool is deployed with custom-trained Models or AutoML Tabular Models, the container(s) of the DeploymentResourcePool will send `stderr` and `stdout` streams to Cloud Logging by default. Please note that the logs incur cost, which are subject to [Cloud Logging pricing](https://cloud.google.com/logging/pricing). User can disable container logging by setting this flag to true. + "encryptionSpec": { # Represents a customer-managed encryption key spec that can be applied to a top-level resource. # Customer-managed encryption key spec for a DeploymentResourcePool. If set, this DeploymentResourcePool will be secured by this key. Endpoints and the DeploymentResourcePool they deploy in need to have the same EncryptionSpec. + "kmsKeyName": "A String", # Required. The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: `projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key`. The key needs to be in the same region as where the compute resource is created. + }, "name": "A String", # Immutable. The resource name of the DeploymentResourcePool. Format: `projects/{project}/locations/{location}/deploymentResourcePools/{deployment_resource_pool}` + "serviceAccount": "A String", # The service account that the DeploymentResourcePool's container(s) run as. Specify the email address of the service account. If this service account is not specified, the container(s) run as a service account that doesn't have access to the resource project. Users deploying the Models to this DeploymentResourcePool must have the `iam.serviceAccounts.actAs` permission on this service account. } @@ -278,7 +288,12 @@

Method Details

"maxReplicaCount": 42, # Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, will use min_replica_count as the default value. The value of this field impacts the charge against Vertex CPU and GPU quotas. Specifically, you will be charged for (max_replica_count * number of cores in the selected machine type) and (max_replica_count * number of GPUs per replica in the selected machine type). "minReplicaCount": 42, # Required. Immutable. The minimum number of machine replicas this DeployedModel will be always deployed on. This value must be greater than or equal to 1. If traffic against the DeployedModel increases, it may dynamically be deployed onto more replicas, and as traffic decreases, some of these extra replicas may be freed. }, + "disableContainerLogging": True or False, # If the DeploymentResourcePool is deployed with custom-trained Models or AutoML Tabular Models, the container(s) of the DeploymentResourcePool will send `stderr` and `stdout` streams to Cloud Logging by default. Please note that the logs incur cost, which are subject to [Cloud Logging pricing](https://cloud.google.com/logging/pricing). User can disable container logging by setting this flag to true. + "encryptionSpec": { # Represents a customer-managed encryption key spec that can be applied to a top-level resource. # Customer-managed encryption key spec for a DeploymentResourcePool. If set, this DeploymentResourcePool will be secured by this key. Endpoints and the DeploymentResourcePool they deploy in need to have the same EncryptionSpec. + "kmsKeyName": "A String", # Required. The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: `projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key`. The key needs to be in the same region as where the compute resource is created. + }, "name": "A String", # Immutable. The resource name of the DeploymentResourcePool. Format: `projects/{project}/locations/{location}/deploymentResourcePools/{deployment_resource_pool}` + "serviceAccount": "A String", # The service account that the DeploymentResourcePool's container(s) run as. Specify the email address of the service account. If this service account is not specified, the container(s) run as a service account that doesn't have access to the resource project. Users deploying the Models to this DeploymentResourcePool must have the `iam.serviceAccounts.actAs` permission on this service account. }, ], "nextPageToken": "A String", # A token, which can be sent as `page_token` to retrieve the next page. If this field is omitted, there are no subsequent pages. diff --git a/docs/dyn/aiplatform_v1.projects.locations.endpoints.html b/docs/dyn/aiplatform_v1.projects.locations.endpoints.html index d135c63b46c..0105d512177 100644 --- a/docs/dyn/aiplatform_v1.projects.locations.endpoints.html +++ b/docs/dyn/aiplatform_v1.projects.locations.endpoints.html @@ -1105,7 +1105,34 @@

diff --git a/docs/dyn/aiplatform_v1.projects.locations.endpoints.html b/docs/dyn/aiplatform_v1.projects.locations.endpoints.html
index d135c63b46c..0105d512177 100644
--- a/docs/dyn/aiplatform_v1.projects.locations.endpoints.html
+++ b/docs/dyn/aiplatform_v1.projects.locations.endpoints.html
@@ -1105,7 +1105,34 @@ Method Details
"maxOutputTokens": 42, # Optional. The maximum number of output tokens to generate per message. "presencePenalty": 3.14, # Optional. Positive penalties. "responseMimeType": "A String", # Optional. Output response mimetype of the generated candidate text. Supported mimetype: - `text/plain`: (default) Text output. - `application/json`: JSON response in the candidates. The model needs to be prompted to output the appropriate response type, otherwise the behavior is undefined. This is a preview feature. - "responseStyle": "A String", # Optional. Control Three levels of creativity in the model output. Default: RESPONSE_STYLE_BALANCED + "responseSchema": { # Schema is used to define the format of input/output data. Represents a select subset of an [OpenAPI 3.0 schema object](https://spec.openapis.org/oas/v3.0.3#schema). More fields may be added in the future as needed. # Optional. The `Schema` object allows the definition of input and output data types. These types can be objects, but also primitives and arrays. Represents a select subset of an [OpenAPI 3.0 schema object](https://spec.openapis.org/oas/v3.0.3#schema). If set, a compatible response_mime_type must also be set. Compatible mimetypes: `application/json`: Schema for JSON response. + "default": "", # Optional. Default value of the data. + "description": "A String", # Optional. The description of the data. + "enum": [ # Optional. Possible values of the element of Type.STRING with enum format. For example we can define an Enum Direction as : {type:STRING, format:enum, enum:["EAST", NORTH", "SOUTH", "WEST"]} + "A String", + ], + "example": "", # Optional. Example of the object. Will only populated when the object is the root. + "format": "A String", # Optional. The format of the data. Supported formats: for NUMBER type: "float", "double" for INTEGER type: "int32", "int64" for STRING type: "email", "byte", etc + "items": # Object with schema name: GoogleCloudAiplatformV1Schema # Optional. SCHEMA FIELDS FOR TYPE ARRAY Schema of the elements of Type.ARRAY. + "maxItems": "A String", # Optional. Maximum number of the elements for Type.ARRAY. + "maxLength": "A String", # Optional. Maximum length of the Type.STRING + "maxProperties": "A String", # Optional. Maximum number of the properties for Type.OBJECT. + "maximum": 3.14, # Optional. Maximum value of the Type.INTEGER and Type.NUMBER + "minItems": "A String", # Optional. Minimum number of the elements for Type.ARRAY. + "minLength": "A String", # Optional. SCHEMA FIELDS FOR TYPE STRING Minimum length of the Type.STRING + "minProperties": "A String", # Optional. Minimum number of the properties for Type.OBJECT. + "minimum": 3.14, # Optional. SCHEMA FIELDS FOR TYPE INTEGER and NUMBER Minimum value of the Type.INTEGER and Type.NUMBER + "nullable": True or False, # Optional. Indicates if the value may be null. + "pattern": "A String", # Optional. Pattern of the Type.STRING to restrict a string to a regular expression. + "properties": { # Optional. SCHEMA FIELDS FOR TYPE OBJECT Properties of Type.OBJECT. + "a_key": # Object with schema name: GoogleCloudAiplatformV1Schema + }, + "required": [ # Optional. Required properties of Type.OBJECT. + "A String", + ], + "title": "A String", # Optional. The title of the Schema. + "type": "A String", # Optional. The type of the data. + }, "stopSequences": [ # Optional. Stop sequences. "A String", ], @@ -1152,6 +1179,14 @@

@@ -1152,6 +1179,14 @@ Method Details
], "role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. Useful to set for multi-turn conversations, otherwise can be left blank or unset. }, + "toolConfig": { # Tool config. This config is shared for all tools provided in the request. # Optional. Tool config. This config is shared for all tools provided in the request. + "functionCallingConfig": { # Function calling config. # Optional. Function calling config. + "allowedFunctionNames": [ # Optional. Function names to call. Only set when the Mode is ANY. Function names should match [FunctionDeclaration.name]. With mode set to ANY, model will predict a function call from the set of function names provided. + "A String", + ], + "mode": "A String", # Optional. Function calling mode. + }, + }, "tools": [ # Optional. A list of `Tools` the model may use to generate the next response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of the model. { # Tool details that the model may use to generate response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g FunctionDeclaration, Retrieval or GoogleSearchRetrieval). "functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 64 function declarations can be provided. @@ -1188,6 +1223,8 @@

@@ -1188,6 +1223,8 @@ Method Details
           },
         },
       ],
+      "googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. GoogleSearchRetrieval tool type. Specialized retrieval tool that is powered by Google search.
+      },
       "retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
         "disableAttribution": True or False, # Optional. Disable using the result from this tool in detecting grounding attribution. This does not affect how the result is given to the model for generation.
         "vertexAiSearch": { # Retrieve from Vertex AI Search datastore for grounding. See https://cloud.google.com/vertex-ai-search-and-conversation # Set to use data source powered by Vertex AI Search.
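The new tool type carries no sub-fields in this surface, so enabling Google Search grounding is just an empty object in `tools` (request body shown in outline only; the prompt is a placeholder):

    body = {
        "contents": [{"role": "user", "parts": [{"text": "Who won the 2022 World Cup?"}]}],
        # A Tool object should contain exactly one tool type.
        "tools": [{"googleSearchRetrieval": {}}],
    }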

@@ -2606,7 +2643,34 @@ Method Details
"maxOutputTokens": 42, # Optional. The maximum number of output tokens to generate per message. "presencePenalty": 3.14, # Optional. Positive penalties. "responseMimeType": "A String", # Optional. Output response mimetype of the generated candidate text. Supported mimetype: - `text/plain`: (default) Text output. - `application/json`: JSON response in the candidates. The model needs to be prompted to output the appropriate response type, otherwise the behavior is undefined. This is a preview feature. - "responseStyle": "A String", # Optional. Control Three levels of creativity in the model output. Default: RESPONSE_STYLE_BALANCED + "responseSchema": { # Schema is used to define the format of input/output data. Represents a select subset of an [OpenAPI 3.0 schema object](https://spec.openapis.org/oas/v3.0.3#schema). More fields may be added in the future as needed. # Optional. The `Schema` object allows the definition of input and output data types. These types can be objects, but also primitives and arrays. Represents a select subset of an [OpenAPI 3.0 schema object](https://spec.openapis.org/oas/v3.0.3#schema). If set, a compatible response_mime_type must also be set. Compatible mimetypes: `application/json`: Schema for JSON response. + "default": "", # Optional. Default value of the data. + "description": "A String", # Optional. The description of the data. + "enum": [ # Optional. Possible values of the element of Type.STRING with enum format. For example we can define an Enum Direction as : {type:STRING, format:enum, enum:["EAST", NORTH", "SOUTH", "WEST"]} + "A String", + ], + "example": "", # Optional. Example of the object. Will only populated when the object is the root. + "format": "A String", # Optional. The format of the data. Supported formats: for NUMBER type: "float", "double" for INTEGER type: "int32", "int64" for STRING type: "email", "byte", etc + "items": # Object with schema name: GoogleCloudAiplatformV1Schema # Optional. SCHEMA FIELDS FOR TYPE ARRAY Schema of the elements of Type.ARRAY. + "maxItems": "A String", # Optional. Maximum number of the elements for Type.ARRAY. + "maxLength": "A String", # Optional. Maximum length of the Type.STRING + "maxProperties": "A String", # Optional. Maximum number of the properties for Type.OBJECT. + "maximum": 3.14, # Optional. Maximum value of the Type.INTEGER and Type.NUMBER + "minItems": "A String", # Optional. Minimum number of the elements for Type.ARRAY. + "minLength": "A String", # Optional. SCHEMA FIELDS FOR TYPE STRING Minimum length of the Type.STRING + "minProperties": "A String", # Optional. Minimum number of the properties for Type.OBJECT. + "minimum": 3.14, # Optional. SCHEMA FIELDS FOR TYPE INTEGER and NUMBER Minimum value of the Type.INTEGER and Type.NUMBER + "nullable": True or False, # Optional. Indicates if the value may be null. + "pattern": "A String", # Optional. Pattern of the Type.STRING to restrict a string to a regular expression. + "properties": { # Optional. SCHEMA FIELDS FOR TYPE OBJECT Properties of Type.OBJECT. + "a_key": # Object with schema name: GoogleCloudAiplatformV1Schema + }, + "required": [ # Optional. Required properties of Type.OBJECT. + "A String", + ], + "title": "A String", # Optional. The title of the Schema. + "type": "A String", # Optional. The type of the data. + }, "stopSequences": [ # Optional. Stop sequences. "A String", ], @@ -2653,6 +2717,14 @@

Method Details

], "role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. Useful to set for multi-turn conversations, otherwise can be left blank or unset. }, + "toolConfig": { # Tool config. This config is shared for all tools provided in the request. # Optional. Tool config. This config is shared for all tools provided in the request. + "functionCallingConfig": { # Function calling config. # Optional. Function calling config. + "allowedFunctionNames": [ # Optional. Function names to call. Only set when the Mode is ANY. Function names should match [FunctionDeclaration.name]. With mode set to ANY, model will predict a function call from the set of function names provided. + "A String", + ], + "mode": "A String", # Optional. Function calling mode. + }, + }, "tools": [ # Optional. A list of `Tools` the model may use to generate the next response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of the model. { # Tool details that the model may use to generate response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g FunctionDeclaration, Retrieval or GoogleSearchRetrieval). "functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 64 function declarations can be provided. @@ -2689,6 +2761,8 @@

Method Details

}, }, ], + "googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. GoogleSearchRetrieval tool type. Specialized retrieval tool that is powered by Google search. + }, "retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation. "disableAttribution": True or False, # Optional. Disable using the result from this tool in detecting grounding attribution. This does not affect how the result is given to the model for generation. "vertexAiSearch": { # Retrieve from Vertex AI Search datastore for grounding. See https://cloud.google.com/vertex-ai-search-and-conversation # Set to use data source powered by Vertex AI Search. diff --git a/docs/dyn/aiplatform_v1.projects.locations.featureOnlineStores.featureViews.html b/docs/dyn/aiplatform_v1.projects.locations.featureOnlineStores.featureViews.html index 4b267be91a4..868d40c56fa 100644 --- a/docs/dyn/aiplatform_v1.projects.locations.featureOnlineStores.featureViews.html +++ b/docs/dyn/aiplatform_v1.projects.locations.featureOnlineStores.featureViews.html @@ -310,6 +310,14 @@

Method Details

         ],
       },
       "stringValue": "A String", # String feature value.
+      "structValue": { # Struct (or object) type feature value. # A struct type feature value.
+        "values": [ # A list of field values.
+          { # One field of a Struct (or object) type feature value.
+            "name": "A String", # Name of the field in the struct feature.
+            "value": # Object with schema name: GoogleCloudAiplatformV1FeatureValue # The value for this field.
+          },
+        ],
+      },
     },
   },
 ],
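Since each struct field's `value` is itself a FeatureValue, struct values can nest arbitrarily. A small helper a caller might write to unwrap them into plain Python; the scalar field names checked here are assumptions based on the FeatureValue message, not part of this change:

    def unwrap_feature_value(fv):
        """Recursively convert a FeatureValue dict into plain Python values."""
        if "structValue" in fv:
            return {
                field["name"]: unwrap_feature_value(field["value"])
                for field in fv["structValue"].get("values", [])
            }
        # Return whichever scalar field happens to be populated.
        for key in ("stringValue", "int64Value", "doubleValue", "boolValue"):
            if key in fv:
                return fv[key]
        return fv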

@@ -638,6 +646,14 @@ Method Details
         ],
       },
       "stringValue": "A String", # String feature value.
+      "structValue": { # Struct (or object) type feature value. # A struct type feature value.
+        "values": [ # A list of field values.
+          { # One field of a Struct (or object) type feature value.
+            "name": "A String", # Name of the field in the struct feature.
+            "value": # Object with schema name: GoogleCloudAiplatformV1FeatureValue # The value for this field.
+          },
+        ],
+      },
     },
   },
 ],
diff --git a/docs/dyn/aiplatform_v1.projects.locations.featureOnlineStores.html b/docs/dyn/aiplatform_v1.projects.locations.featureOnlineStores.html
index 83acc451331..a970aab7af0 100644
--- a/docs/dyn/aiplatform_v1.projects.locations.featureOnlineStores.html
+++ b/docs/dyn/aiplatform_v1.projects.locations.featureOnlineStores.html
@@ -132,6 +132,9 @@ Method Details
"dedicatedServingEndpoint": { # The dedicated serving endpoint for this FeatureOnlineStore. Only need to set when you choose Optimized storage type. Public endpoint is provisioned by default. # Optional. The dedicated serving endpoint for this FeatureOnlineStore, which is different from common Vertex service endpoint. "publicEndpointDomainName": "A String", # Output only. This field will be populated with the domain name to use for this FeatureOnlineStore }, + "encryptionSpec": { # Represents a customer-managed encryption key spec that can be applied to a top-level resource. # Optional. Customer-managed encryption key spec for data storage. If set, online store will be secured by this key. + "kmsKeyName": "A String", # Required. The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: `projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key`. The key needs to be in the same region as where the compute resource is created. + }, "etag": "A String", # Optional. Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens. "labels": { # Optional. The labels with user-defined metadata to organize your FeatureOnlineStore. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information on and examples of labels. No more than 64 user labels can be associated with one FeatureOnlineStore(System labels are excluded)." System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable. "a_key": "A String", @@ -235,6 +238,9 @@

@@ -235,6 +238,9 @@ Method Details
"dedicatedServingEndpoint": { # The dedicated serving endpoint for this FeatureOnlineStore. Only need to set when you choose Optimized storage type. Public endpoint is provisioned by default. # Optional. The dedicated serving endpoint for this FeatureOnlineStore, which is different from common Vertex service endpoint. "publicEndpointDomainName": "A String", # Output only. This field will be populated with the domain name to use for this FeatureOnlineStore }, + "encryptionSpec": { # Represents a customer-managed encryption key spec that can be applied to a top-level resource. # Optional. Customer-managed encryption key spec for data storage. If set, online store will be secured by this key. + "kmsKeyName": "A String", # Required. The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: `projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key`. The key needs to be in the same region as where the compute resource is created. + }, "etag": "A String", # Optional. Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens. "labels": { # Optional. The labels with user-defined metadata to organize your FeatureOnlineStore. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information on and examples of labels. No more than 64 user labels can be associated with one FeatureOnlineStore(System labels are excluded)." System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable. "a_key": "A String", @@ -279,6 +285,9 @@

Method Details

"dedicatedServingEndpoint": { # The dedicated serving endpoint for this FeatureOnlineStore. Only need to set when you choose Optimized storage type. Public endpoint is provisioned by default. # Optional. The dedicated serving endpoint for this FeatureOnlineStore, which is different from common Vertex service endpoint. "publicEndpointDomainName": "A String", # Output only. This field will be populated with the domain name to use for this FeatureOnlineStore }, + "encryptionSpec": { # Represents a customer-managed encryption key spec that can be applied to a top-level resource. # Optional. Customer-managed encryption key spec for data storage. If set, online store will be secured by this key. + "kmsKeyName": "A String", # Required. The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: `projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key`. The key needs to be in the same region as where the compute resource is created. + }, "etag": "A String", # Optional. Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens. "labels": { # Optional. The labels with user-defined metadata to organize your FeatureOnlineStore. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information on and examples of labels. No more than 64 user labels can be associated with one FeatureOnlineStore(System labels are excluded)." System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable. "a_key": "A String", @@ -329,6 +338,9 @@

Method Details

"dedicatedServingEndpoint": { # The dedicated serving endpoint for this FeatureOnlineStore. Only need to set when you choose Optimized storage type. Public endpoint is provisioned by default. # Optional. The dedicated serving endpoint for this FeatureOnlineStore, which is different from common Vertex service endpoint. "publicEndpointDomainName": "A String", # Output only. This field will be populated with the domain name to use for this FeatureOnlineStore }, + "encryptionSpec": { # Represents a customer-managed encryption key spec that can be applied to a top-level resource. # Optional. Customer-managed encryption key spec for data storage. If set, online store will be secured by this key. + "kmsKeyName": "A String", # Required. The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: `projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key`. The key needs to be in the same region as where the compute resource is created. + }, "etag": "A String", # Optional. Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens. "labels": { # Optional. The labels with user-defined metadata to organize your FeatureOnlineStore. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information on and examples of labels. No more than 64 user labels can be associated with one FeatureOnlineStore(System labels are excluded)." System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable. "a_key": "A String", diff --git a/docs/dyn/aiplatform_v1.projects.locations.featurestores.entityTypes.html b/docs/dyn/aiplatform_v1.projects.locations.featurestores.entityTypes.html index 9ea83dd651f..25458f5d4ef 100644 --- a/docs/dyn/aiplatform_v1.projects.locations.featurestores.entityTypes.html +++ b/docs/dyn/aiplatform_v1.projects.locations.featurestores.entityTypes.html @@ -741,6 +741,14 @@

Method Details

         ],
       },
       "stringValue": "A String", # String feature value.
+      "structValue": { # Struct (or object) type feature value. # A struct type feature value.
+        "values": [ # A list of field values.
+          { # One field of a Struct (or object) type feature value.
+            "name": "A String", # Name of the field in the struct feature.
+            "value": # Object with schema name: GoogleCloudAiplatformV1FeatureValue # The value for this field.
+          },
+        ],
+      },
     },
     "values": { # Container for list of values. # Feature values list if values, successive in time, are requested. If the requested number of values is greater than the number of existing Feature values, nonexistent values are omitted instead of being returned as empty.
       "values": [ # A list of feature values. All of them should be the same data type.
@@ -773,6 +781,14 @@ Method Details
         ],
       },
       "stringValue": "A String", # String feature value.
+      "structValue": { # Struct (or object) type feature value. # A struct type feature value.
+        "values": [ # A list of field values.
+          { # One field of a Struct (or object) type feature value.
+            "name": "A String", # Name of the field in the struct feature.
+            "value": # Object with schema name: GoogleCloudAiplatformV1FeatureValue # The value for this field.
+          },
+        ],
+      },
     },
   ],
 },
@@ -912,6 +928,14 @@ Method Details
         ],
       },
       "stringValue": "A String", # String feature value.
+      "structValue": { # Struct (or object) type feature value. # A struct type feature value.
+        "values": [ # A list of field values.
+          { # One field of a Struct (or object) type feature value.
+            "name": "A String", # Name of the field in the struct feature.
+            "value": # Object with schema name: GoogleCloudAiplatformV1FeatureValue # The value for this field.
+          },
+        ],
+      },
     },
     "values": { # Container for list of values. # Feature values list if values, successive in time, are requested. If the requested number of values is greater than the number of existing Feature values, nonexistent values are omitted instead of being returned as empty.
       "values": [ # A list of feature values. All of them should be the same data type.
@@ -944,6 +968,14 @@ Method Details
         ],
       },
       "stringValue": "A String", # String feature value.
+      "structValue": { # Struct (or object) type feature value. # A struct type feature value.
+        "values": [ # A list of field values.
+          { # One field of a Struct (or object) type feature value.
+            "name": "A String", # Name of the field in the struct feature.
+            "value": # Object with schema name: GoogleCloudAiplatformV1FeatureValue # The value for this field.
+          },
+        ],
+      },
     },
   ],
 },
@@ -1027,6 +1059,14 @@ Method Details
         ],
       },
       "stringValue": "A String", # String feature value.
+      "structValue": { # Struct (or object) type feature value. # A struct type feature value.
+        "values": [ # A list of field values.
+          { # One field of a Struct (or object) type feature value.
+            "name": "A String", # Name of the field in the struct feature.
+            "value": # Object with schema name: GoogleCloudAiplatformV1FeatureValue # The value for this field.
+          },
+        ],
+      },
     },
   },
 },
diff --git a/docs/dyn/aiplatform_v1.projects.locations.metadataStores.html b/docs/dyn/aiplatform_v1.projects.locations.metadataStores.html
index 0e698eeb169..400948e4c50 100644
--- a/docs/dyn/aiplatform_v1.projects.locations.metadataStores.html
+++ b/docs/dyn/aiplatform_v1.projects.locations.metadataStores.html
@@ -134,6 +134,9 @@ Method Details
 { # Instance of a metadata store. Contains a set of metadata that can be queried.
   "createTime": "A String", # Output only. Timestamp when this MetadataStore was created.
+  "dataplexConfig": { # Represents Dataplex integration settings. # Optional. Dataplex integration settings.
+    "enabledPipelinesLineage": True or False, # Optional. Whether or not Data Lineage synchronization is enabled for Vertex Pipelines.
+  },
   "description": "A String", # Description of the MetadataStore.
   "encryptionSpec": { # Represents a customer-managed encryption key spec that can be applied to a top-level resource. # Customer-managed encryption key spec for a Metadata Store. If set, this Metadata Store and all sub-resources of this Metadata Store are secured using this key.
     "kmsKeyName": "A String", # Required. The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: `projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key`. The key needs to be in the same region as where the compute resource is created.
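A sketch of opting a new metadata store into Dataplex lineage sync via the added field; identifiers are placeholders, and whether lineage applies retroactively is not addressed by this patch:

    from googleapiclient.discovery import build

    aiplatform = build("aiplatform", "v1")
    op = aiplatform.projects().locations().metadataStores().create(
        parent="projects/my-project/locations/us-central1",
        metadataStoreId="default",
        body={"dataplexConfig": {"enabledPipelinesLineage": True}},
    ).execute()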

@@ -227,6 +230,9 @@ Method Details
 { # Instance of a metadata store. Contains a set of metadata that can be queried.
   "createTime": "A String", # Output only. Timestamp when this MetadataStore was created.
+  "dataplexConfig": { # Represents Dataplex integration settings. # Optional. Dataplex integration settings.
+    "enabledPipelinesLineage": True or False, # Optional. Whether or not Data Lineage synchronization is enabled for Vertex Pipelines.
+  },
   "description": "A String", # Description of the MetadataStore.
   "encryptionSpec": { # Represents a customer-managed encryption key spec that can be applied to a top-level resource. # Customer-managed encryption key spec for a Metadata Store. If set, this Metadata Store and all sub-resources of this Metadata Store are secured using this key.
     "kmsKeyName": "A String", # Required. The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: `projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key`. The key needs to be in the same region as where the compute resource is created.
@@ -259,6 +265,9 @@ Method Details
   "metadataStores": [ # The MetadataStores found for the Location.
     { # Instance of a metadata store. Contains a set of metadata that can be queried.
       "createTime": "A String", # Output only. Timestamp when this MetadataStore was created.
+      "dataplexConfig": { # Represents Dataplex integration settings. # Optional. Dataplex integration settings.
+        "enabledPipelinesLineage": True or False, # Optional. Whether or not Data Lineage synchronization is enabled for Vertex Pipelines.
+      },
       "description": "A String", # Description of the MetadataStore.
       "encryptionSpec": { # Represents a customer-managed encryption key spec that can be applied to a top-level resource. # Customer-managed encryption key spec for a Metadata Store. If set, this Metadata Store and all sub-resources of this Metadata Store are secured using this key.
         "kmsKeyName": "A String", # Required. The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: `projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key`. The key needs to be in the same region as where the compute resource is created.
diff --git a/docs/dyn/aiplatform_v1.projects.locations.models.html b/docs/dyn/aiplatform_v1.projects.locations.models.html
index 69a718b2c4a..87b5514a5ec 100644
--- a/docs/dyn/aiplatform_v1.projects.locations.models.html
+++ b/docs/dyn/aiplatform_v1.projects.locations.models.html
@@ -352,7 +352,7 @@ Method Details
       },
     ],
     "healthProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes readiness probe.
-      "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take.
+      "exec": { # ExecAction specifies a command to execute. # ExecAction probes the health of a container by executing a command.
         "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy.
           "A String",
         ],
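The reworded comment makes the semantics explicit: the probe runs a command inside the container and treats exit status 0 as healthy. A sketch of wiring an exec probe into a model upload; the image URI, command, and timing values are placeholders:

    from googleapiclient.discovery import build

    aiplatform = build("aiplatform", "v1")
    op = aiplatform.projects().locations().models().upload(
        parent="projects/my-project/locations/us-central1",
        body={
            "model": {
                "displayName": "my-model",  # placeholder
                "containerSpec": {
                    "imageUri": "us-docker.pkg.dev/my-project/my-repo/serve:latest",
                    "healthProbe": {
                        # Exit status 0 is healthy; non-zero is unhealthy.
                        "exec": {"command": ["/bin/grep", "ready", "/tmp/status"]},
                        "periodSeconds": 10,
                        "timeoutSeconds": 5,
                    },
                },
            },
        },
    ).execute()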

@@ -370,7 +370,7 @@ Method Details
"predictRoute": "A String", # Immutable. HTTP path on the container to send prediction requests to. Vertex AI forwards requests sent using projects.locations.endpoints.predict to this path on the container's IP address and port. Vertex AI then returns the container's response in the API response. For example, if you set this field to `/foo`, then when Vertex AI receives a prediction request, it forwards the request body in a POST request to the `/foo` path on the port of your container specified by the first value of this `ModelContainerSpec`'s ports field. If you don't specify this field, it defaults to the following value when you deploy this Model to an Endpoint: /v1/endpoints/ENDPOINT/deployedModels/DEPLOYED_MODEL:predict The placeholders in this value are replaced as follows: * ENDPOINT: The last segment (following `endpoints/`)of the Endpoint.name][] field of the Endpoint where this Model has been deployed. (Vertex AI makes this value available to your container code as the [`AIP_ENDPOINT_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) * DEPLOYED_MODEL: DeployedModel.id of the `DeployedModel`. (Vertex AI makes this value available to your container code as the [`AIP_DEPLOYED_MODEL_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) "sharedMemorySizeMb": "A String", # Immutable. The amount of the VM memory to reserve as the shared memory for the model in megabytes. "startupProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes startup probe. - "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take. + "exec": { # ExecAction specifies a command to execute. # ExecAction probes the health of a container by executing a command. "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. "A String", ], @@ -643,7 +643,7 @@

Method Details

}, ], "healthProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes readiness probe. - "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take. + "exec": { # ExecAction specifies a command to execute. # ExecAction probes the health of a container by executing a command. "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. "A String", ], @@ -661,7 +661,7 @@

Method Details

"predictRoute": "A String", # Immutable. HTTP path on the container to send prediction requests to. Vertex AI forwards requests sent using projects.locations.endpoints.predict to this path on the container's IP address and port. Vertex AI then returns the container's response in the API response. For example, if you set this field to `/foo`, then when Vertex AI receives a prediction request, it forwards the request body in a POST request to the `/foo` path on the port of your container specified by the first value of this `ModelContainerSpec`'s ports field. If you don't specify this field, it defaults to the following value when you deploy this Model to an Endpoint: /v1/endpoints/ENDPOINT/deployedModels/DEPLOYED_MODEL:predict The placeholders in this value are replaced as follows: * ENDPOINT: The last segment (following `endpoints/`)of the Endpoint.name][] field of the Endpoint where this Model has been deployed. (Vertex AI makes this value available to your container code as the [`AIP_ENDPOINT_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) * DEPLOYED_MODEL: DeployedModel.id of the `DeployedModel`. (Vertex AI makes this value available to your container code as the [`AIP_DEPLOYED_MODEL_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) "sharedMemorySizeMb": "A String", # Immutable. The amount of the VM memory to reserve as the shared memory for the model in megabytes. "startupProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes startup probe. - "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take. + "exec": { # ExecAction specifies a command to execute. # ExecAction probes the health of a container by executing a command. "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. "A String", ], @@ -902,7 +902,7 @@

Method Details

}, ], "healthProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes readiness probe. - "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take. + "exec": { # ExecAction specifies a command to execute. # ExecAction probes the health of a container by executing a command. "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. "A String", ], @@ -920,7 +920,7 @@

Method Details

"predictRoute": "A String", # Immutable. HTTP path on the container to send prediction requests to. Vertex AI forwards requests sent using projects.locations.endpoints.predict to this path on the container's IP address and port. Vertex AI then returns the container's response in the API response. For example, if you set this field to `/foo`, then when Vertex AI receives a prediction request, it forwards the request body in a POST request to the `/foo` path on the port of your container specified by the first value of this `ModelContainerSpec`'s ports field. If you don't specify this field, it defaults to the following value when you deploy this Model to an Endpoint: /v1/endpoints/ENDPOINT/deployedModels/DEPLOYED_MODEL:predict The placeholders in this value are replaced as follows: * ENDPOINT: The last segment (following `endpoints/`)of the Endpoint.name][] field of the Endpoint where this Model has been deployed. (Vertex AI makes this value available to your container code as the [`AIP_ENDPOINT_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) * DEPLOYED_MODEL: DeployedModel.id of the `DeployedModel`. (Vertex AI makes this value available to your container code as the [`AIP_DEPLOYED_MODEL_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) "sharedMemorySizeMb": "A String", # Immutable. The amount of the VM memory to reserve as the shared memory for the model in megabytes. "startupProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes startup probe. - "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take. + "exec": { # ExecAction specifies a command to execute. # ExecAction probes the health of a container by executing a command. "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. "A String", ], @@ -1191,7 +1191,7 @@

Method Details

}, ], "healthProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes readiness probe. - "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take. + "exec": { # ExecAction specifies a command to execute. # ExecAction probes the health of a container by executing a command. "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. "A String", ], @@ -1209,7 +1209,7 @@

Method Details

"predictRoute": "A String", # Immutable. HTTP path on the container to send prediction requests to. Vertex AI forwards requests sent using projects.locations.endpoints.predict to this path on the container's IP address and port. Vertex AI then returns the container's response in the API response. For example, if you set this field to `/foo`, then when Vertex AI receives a prediction request, it forwards the request body in a POST request to the `/foo` path on the port of your container specified by the first value of this `ModelContainerSpec`'s ports field. If you don't specify this field, it defaults to the following value when you deploy this Model to an Endpoint: /v1/endpoints/ENDPOINT/deployedModels/DEPLOYED_MODEL:predict The placeholders in this value are replaced as follows: * ENDPOINT: The last segment (following `endpoints/`)of the Endpoint.name][] field of the Endpoint where this Model has been deployed. (Vertex AI makes this value available to your container code as the [`AIP_ENDPOINT_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) * DEPLOYED_MODEL: DeployedModel.id of the `DeployedModel`. (Vertex AI makes this value available to your container code as the [`AIP_DEPLOYED_MODEL_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) "sharedMemorySizeMb": "A String", # Immutable. The amount of the VM memory to reserve as the shared memory for the model in megabytes. "startupProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes startup probe. - "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take. + "exec": { # ExecAction specifies a command to execute. # ExecAction probes the health of a container by executing a command. "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. "A String", ], @@ -1435,7 +1435,7 @@

Method Details

}, ], "healthProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes readiness probe. - "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take. + "exec": { # ExecAction specifies a command to execute. # ExecAction probes the health of a container by executing a command. "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. "A String", ], @@ -1453,7 +1453,7 @@

Method Details

"predictRoute": "A String", # Immutable. HTTP path on the container to send prediction requests to. Vertex AI forwards requests sent using projects.locations.endpoints.predict to this path on the container's IP address and port. Vertex AI then returns the container's response in the API response. For example, if you set this field to `/foo`, then when Vertex AI receives a prediction request, it forwards the request body in a POST request to the `/foo` path on the port of your container specified by the first value of this `ModelContainerSpec`'s ports field. If you don't specify this field, it defaults to the following value when you deploy this Model to an Endpoint: /v1/endpoints/ENDPOINT/deployedModels/DEPLOYED_MODEL:predict The placeholders in this value are replaced as follows: * ENDPOINT: The last segment (following `endpoints/`)of the Endpoint.name][] field of the Endpoint where this Model has been deployed. (Vertex AI makes this value available to your container code as the [`AIP_ENDPOINT_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) * DEPLOYED_MODEL: DeployedModel.id of the `DeployedModel`. (Vertex AI makes this value available to your container code as the [`AIP_DEPLOYED_MODEL_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) "sharedMemorySizeMb": "A String", # Immutable. The amount of the VM memory to reserve as the shared memory for the model in megabytes. "startupProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes startup probe. - "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take. + "exec": { # ExecAction specifies a command to execute. # ExecAction probes the health of a container by executing a command. "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. "A String", ], @@ -1678,7 +1678,7 @@

Method Details

}, ], "healthProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes readiness probe. - "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take. + "exec": { # ExecAction specifies a command to execute. # ExecAction probes the health of a container by executing a command. "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. "A String", ], @@ -1696,7 +1696,7 @@

Method Details

"predictRoute": "A String", # Immutable. HTTP path on the container to send prediction requests to. Vertex AI forwards requests sent using projects.locations.endpoints.predict to this path on the container's IP address and port. Vertex AI then returns the container's response in the API response. For example, if you set this field to `/foo`, then when Vertex AI receives a prediction request, it forwards the request body in a POST request to the `/foo` path on the port of your container specified by the first value of this `ModelContainerSpec`'s ports field. If you don't specify this field, it defaults to the following value when you deploy this Model to an Endpoint: /v1/endpoints/ENDPOINT/deployedModels/DEPLOYED_MODEL:predict The placeholders in this value are replaced as follows: * ENDPOINT: The last segment (following `endpoints/`)of the Endpoint.name][] field of the Endpoint where this Model has been deployed. (Vertex AI makes this value available to your container code as the [`AIP_ENDPOINT_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) * DEPLOYED_MODEL: DeployedModel.id of the `DeployedModel`. (Vertex AI makes this value available to your container code as the [`AIP_DEPLOYED_MODEL_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) "sharedMemorySizeMb": "A String", # Immutable. The amount of the VM memory to reserve as the shared memory for the model in megabytes. "startupProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes startup probe. - "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take. + "exec": { # ExecAction specifies a command to execute. # ExecAction probes the health of a container by executing a command. "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. "A String", ], @@ -2060,7 +2060,7 @@

Method Details

}, ], "healthProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes readiness probe. - "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take. + "exec": { # ExecAction specifies a command to execute. # ExecAction probes the health of a container by executing a command. "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. "A String", ], @@ -2078,7 +2078,7 @@

Method Details

"predictRoute": "A String", # Immutable. HTTP path on the container to send prediction requests to. Vertex AI forwards requests sent using projects.locations.endpoints.predict to this path on the container's IP address and port. Vertex AI then returns the container's response in the API response. For example, if you set this field to `/foo`, then when Vertex AI receives a prediction request, it forwards the request body in a POST request to the `/foo` path on the port of your container specified by the first value of this `ModelContainerSpec`'s ports field. If you don't specify this field, it defaults to the following value when you deploy this Model to an Endpoint: /v1/endpoints/ENDPOINT/deployedModels/DEPLOYED_MODEL:predict The placeholders in this value are replaced as follows: * ENDPOINT: The last segment (following `endpoints/`)of the Endpoint.name][] field of the Endpoint where this Model has been deployed. (Vertex AI makes this value available to your container code as the [`AIP_ENDPOINT_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) * DEPLOYED_MODEL: DeployedModel.id of the `DeployedModel`. (Vertex AI makes this value available to your container code as the [`AIP_DEPLOYED_MODEL_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) "sharedMemorySizeMb": "A String", # Immutable. The amount of the VM memory to reserve as the shared memory for the model in megabytes. "startupProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes startup probe. - "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take. + "exec": { # ExecAction specifies a command to execute. # ExecAction probes the health of a container by executing a command. "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. "A String", ], diff --git a/docs/dyn/aiplatform_v1.projects.locations.notebookRuntimeTemplates.html b/docs/dyn/aiplatform_v1.projects.locations.notebookRuntimeTemplates.html index 83eda7c6db2..9ed51ba0951 100644 --- a/docs/dyn/aiplatform_v1.projects.locations.notebookRuntimeTemplates.html +++ b/docs/dyn/aiplatform_v1.projects.locations.notebookRuntimeTemplates.html @@ -95,6 +95,9 @@

Instance Methods

   list_next()
 Retrieves the next page of results.
+
+  patch(name, body=None, updateMask=None, x__xgafv=None)
+Updates a NotebookRuntimeTemplate.
   setIamPolicy(resource, body=None, x__xgafv=None)
 Sets the access control policy on the specified resource. Replaces any existing policy. Can return `NOT_FOUND`, `INVALID_ARGUMENT`, and `PERMISSION_DENIED` errors.
@@ -124,6 +127,9 @@ Method Details
}, "description": "A String", # The description of the NotebookRuntimeTemplate. "displayName": "A String", # Required. The display name of the NotebookRuntimeTemplate. The name can be up to 128 characters long and can consist of any UTF-8 characters. + "encryptionSpec": { # Represents a customer-managed encryption key spec that can be applied to a top-level resource. # Customer-managed encryption key spec for the notebook runtime. + "kmsKeyName": "A String", # Required. The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: `projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key`. The key needs to be in the same region as where the compute resource is created. + }, "etag": "A String", # Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens. "eucConfig": { # The euc configuration of NotebookRuntimeTemplate. # EUC configuration of the NotebookRuntimeTemplate. "bypassActasCheck": True or False, # Output only. Whether ActAs check is bypassed for service account attached to the VM. If false, we need ActAs check for the default Compute Engine Service account. When a Runtime is created, a VM is allocated using Default Compute Engine Service Account. Any user requesting to use this Runtime requires Service Account User (ActAs) permission over this SA. If true, Runtime owner is using EUC and does not require the above permission as VM no longer use default Compute Engine SA, but a P4SA. @@ -153,13 +159,6 @@

Method Details

"A String", ], "notebookRuntimeType": "A String", # Optional. Immutable. The type of the notebook runtime template. - "reservationAffinity": { # Notebook Reservation Affinity for consuming Zonal reservation. # Optional. Reservation Affinity of the notebook runtime template. - "consumeReservationType": "A String", # Required. Specifies the type of reservation from which this instance can consume resources: RESERVATION_ANY (default), RESERVATION_SPECIFIC, or RESERVATION_NONE. See Consuming reserved instances for examples. - "key": "A String", # Optional. Corresponds to the label key of a reservation resource. To target a RESERVATION_SPECIFIC by name, use compute.googleapis.com/reservation-name as the key and specify the name of your reservation as its value. - "values": [ # Optional. Corresponds to the label values of a reservation resource. This must be the full path name of Reservation. - "A String", - ], - }, "serviceAccount": "A String", # The service account that the runtime workload runs as. You can use any service account within the same project, but you must have the service account user permission to use the instance. If not specified, the [Compute Engine default service account](https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used. "shieldedVmConfig": { # A set of Shielded Instance options. See [Images using supported Shielded VM features](https://cloud.google.com/compute/docs/instances/modifying-shielded-vm). # Optional. Immutable. Runtime Shielded VM spec. "enableSecureBoot": True or False, # Defines whether the instance has [Secure Boot](https://cloud.google.com/compute/shielded-vm/docs/shielded-vm#secure-boot) enabled. Secure Boot helps ensure that the system only runs authentic software by verifying the digital signature of all boot components, and halting the boot process if signature verification fails. @@ -254,6 +253,9 @@

Method Details

}, "description": "A String", # The description of the NotebookRuntimeTemplate. "displayName": "A String", # Required. The display name of the NotebookRuntimeTemplate. The name can be up to 128 characters long and can consist of any UTF-8 characters. + "encryptionSpec": { # Represents a customer-managed encryption key spec that can be applied to a top-level resource. # Customer-managed encryption key spec for the notebook runtime. + "kmsKeyName": "A String", # Required. The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: `projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key`. The key needs to be in the same region as where the compute resource is created. + }, "etag": "A String", # Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens. "eucConfig": { # The euc configuration of NotebookRuntimeTemplate. # EUC configuration of the NotebookRuntimeTemplate. "bypassActasCheck": True or False, # Output only. Whether ActAs check is bypassed for service account attached to the VM. If false, we need ActAs check for the default Compute Engine Service account. When a Runtime is created, a VM is allocated using Default Compute Engine Service Account. Any user requesting to use this Runtime requires Service Account User (ActAs) permission over this SA. If true, Runtime owner is using EUC and does not require the above permission as VM no longer use default Compute Engine SA, but a P4SA. @@ -283,13 +285,6 @@

"A String", ], "notebookRuntimeType": "A String", # Optional. Immutable. The type of the notebook runtime template. - "reservationAffinity": { # Notebook Reservation Affinity for consuming Zonal reservation. # Optional. Reservation Affinity of the notebook runtime template. - "consumeReservationType": "A String", # Required. Specifies the type of reservation from which this instance can consume resources: RESERVATION_ANY (default), RESERVATION_SPECIFIC, or RESERVATION_NONE. See Consuming reserved instances for examples. - "key": "A String", # Optional. Corresponds to the label key of a reservation resource. To target a RESERVATION_SPECIFIC by name, use compute.googleapis.com/reservation-name as the key and specify the name of your reservation as its value. - "values": [ # Optional. Corresponds to the label values of a reservation resource. This must be the full path name of Reservation. - "A String", - ], - }, "serviceAccount": "A String", # The service account that the runtime workload runs as. You can use any service account within the same project, but you must have the service account user permission to use the instance. If not specified, the [Compute Engine default service account](https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used. "shieldedVmConfig": { # A set of Shielded Instance options. See [Images using supported Shielded VM features](https://cloud.google.com/compute/docs/instances/modifying-shielded-vm). # Optional. Immutable. Runtime Shielded VM spec. "enableSecureBoot": True or False, # Defines whether the instance has [Secure Boot](https://cloud.google.com/compute/shielded-vm/docs/shielded-vm#secure-boot) enabled. Secure Boot helps ensure that the system only runs authentic software by verifying the digital signature of all boot components, and halting the boot process if signature verification fails. @@ -363,6 +358,9 @@

}, "description": "A String", # The description of the NotebookRuntimeTemplate. "displayName": "A String", # Required. The display name of the NotebookRuntimeTemplate. The name can be up to 128 characters long and can consist of any UTF-8 characters. + "encryptionSpec": { # Represents a customer-managed encryption key spec that can be applied to a top-level resource. # Customer-managed encryption key spec for the notebook runtime. + "kmsKeyName": "A String", # Required. The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: `projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key`. The key needs to be in the same region as where the compute resource is created. + }, "etag": "A String", # Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens. "eucConfig": { # The euc configuration of NotebookRuntimeTemplate. # EUC configuration of the NotebookRuntimeTemplate. "bypassActasCheck": True or False, # Output only. Whether ActAs check is bypassed for service account attached to the VM. If false, we need ActAs check for the default Compute Engine Service account. When a Runtime is created, a VM is allocated using Default Compute Engine Service Account. Any user requesting to use this Runtime requires Service Account User (ActAs) permission over this SA. If true, Runtime owner is using EUC and does not require the above permission as VM no longer use default Compute Engine SA, but a P4SA. @@ -392,13 +390,6 @@

"A String", ], "notebookRuntimeType": "A String", # Optional. Immutable. The type of the notebook runtime template. - "reservationAffinity": { # Notebook Reservation Affinity for consuming Zonal reservation. # Optional. Reservation Affinity of the notebook runtime template. - "consumeReservationType": "A String", # Required. Specifies the type of reservation from which this instance can consume resources: RESERVATION_ANY (default), RESERVATION_SPECIFIC, or RESERVATION_NONE. See Consuming reserved instances for examples. - "key": "A String", # Optional. Corresponds to the label key of a reservation resource. To target a RESERVATION_SPECIFIC by name, use compute.googleapis.com/reservation-name as the key and specify the name of your reservation as its value. - "values": [ # Optional. Corresponds to the label values of a reservation resource. This must be the full path name of Reservation. - "A String", - ], - }, "serviceAccount": "A String", # The service account that the runtime workload runs as. You can use any service account within the same project, but you must have the service account user permission to use the instance. If not specified, the [Compute Engine default service account](https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used. "shieldedVmConfig": { # A set of Shielded Instance options. See [Images using supported Shielded VM features](https://cloud.google.com/compute/docs/instances/modifying-shielded-vm). # Optional. Immutable. Runtime Shielded VM spec. "enableSecureBoot": True or False, # Defines whether the instance has [Secure Boot](https://cloud.google.com/compute/shielded-vm/docs/shielded-vm#secure-boot) enabled. Secure Boot helps ensure that the system only runs authentic software by verifying the digital signature of all boot components, and halting the boot process if signature verification fails. @@ -423,6 +414,119 @@

+
+    patch(name, body=None, updateMask=None, x__xgafv=None)
+  Updates a NotebookRuntimeTemplate.
+
+Args:
+  name: string, The resource name of the NotebookRuntimeTemplate. (required)
+  body: object, The request body.
+    The object takes the form of:
+
+{ # A template that specifies runtime configurations such as machine type, runtime version, network configurations, etc. Multiple runtimes can be created from a runtime template.
+  "createTime": "A String", # Output only. Timestamp when this NotebookRuntimeTemplate was created.
+  "dataPersistentDiskSpec": { # Represents the spec of persistent disk options. # Optional. The specification of persistent disk attached to the runtime as data disk storage.
+    "diskSizeGb": "A String", # Size in GB of the disk (default is 100GB).
+    "diskType": "A String", # Type of the disk (default is "pd-standard"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) "pd-standard" (Persistent Disk Hard Disk Drive) "pd-balanced" (Balanced Persistent Disk) "pd-extreme" (Extreme Persistent Disk)
+  },
+  "description": "A String", # The description of the NotebookRuntimeTemplate.
+  "displayName": "A String", # Required. The display name of the NotebookRuntimeTemplate. The name can be up to 128 characters long and can consist of any UTF-8 characters.
+  "encryptionSpec": { # Represents a customer-managed encryption key spec that can be applied to a top-level resource. # Customer-managed encryption key spec for the notebook runtime.
+    "kmsKeyName": "A String", # Required. The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: `projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key`. The key needs to be in the same region as where the compute resource is created.
+  },
+  "etag": "A String", # Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
+  "eucConfig": { # The euc configuration of NotebookRuntimeTemplate. # EUC configuration of the NotebookRuntimeTemplate.
+    "bypassActasCheck": True or False, # Output only. Whether ActAs check is bypassed for service account attached to the VM. If false, we need ActAs check for the default Compute Engine Service account. When a Runtime is created, a VM is allocated using Default Compute Engine Service Account. Any user requesting to use this Runtime requires Service Account User (ActAs) permission over this SA. If true, Runtime owner is using EUC and does not require the above permission as VM no longer use default Compute Engine SA, but a P4SA.
+    "eucDisabled": True or False, # Input only. Whether EUC is disabled in this NotebookRuntimeTemplate. In proto3, the default value of a boolean is false. In this way, by default EUC will be enabled for NotebookRuntimeTemplate.
+  },
+  "idleShutdownConfig": { # The idle shutdown configuration of NotebookRuntimeTemplate, which contains the idle_timeout as required field. # The idle shutdown configuration of NotebookRuntimeTemplate. This config will only be set when idle shutdown is enabled.
+    "idleShutdownDisabled": True or False, # Whether Idle Shutdown is disabled in this NotebookRuntimeTemplate.
+    "idleTimeout": "A String", # Required. Duration is accurate to the second. In Notebook, Idle Timeout is accurate to minute so the range of idle_timeout (second) is: 10 * 60 ~ 1440 * 60.
+  },
+  "isDefault": True or False, # Output only. The default template to use if not specified.
+  "labels": { # The labels with user-defined metadata to organize the NotebookRuntimeTemplates. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
+    "a_key": "A String",
+  },
+  "machineSpec": { # Specification of a single machine. # Optional. Immutable. The specification of a single machine for the template.
+    "acceleratorCount": 42, # The number of accelerators to attach to the machine.
+    "acceleratorType": "A String", # Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
+    "machineType": "A String", # Immutable. The type of the machine. See the [list of machine types supported for prediction](https://cloud.google.com/vertex-ai/docs/predictions/configure-compute#machine-types) See the [list of machine types supported for custom training](https://cloud.google.com/vertex-ai/docs/training/configure-compute#machine-types). For DeployedModel this field is optional, and the default value is `n1-standard-2`. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
+    "tpuTopology": "A String", # Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
+  },
+  "name": "A String", # The resource name of the NotebookRuntimeTemplate.
+  "networkSpec": { # Network spec. # Optional. Network spec.
+    "enableInternetAccess": True or False, # Whether to enable public internet access. Default false.
+    "network": "A String", # The full name of the Google Compute Engine [network](https://cloud.google.com//compute/docs/networks-and-firewalls#networks)
+    "subnetwork": "A String", # The name of the subnet that this instance is in. Format: `projects/{project_id_or_number}/regions/{region}/subnetworks/{subnetwork_id}`
+  },
+  "networkTags": [ # Optional. The Compute Engine tags to add to runtime (see [Tagging instances](https://cloud.google.com/vpc/docs/add-remove-network-tags)).
+    "A String",
+  ],
+  "notebookRuntimeType": "A String", # Optional. Immutable. The type of the notebook runtime template.
+  "serviceAccount": "A String", # The service account that the runtime workload runs as. You can use any service account within the same project, but you must have the service account user permission to use the instance. If not specified, the [Compute Engine default service account](https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
+  "shieldedVmConfig": { # A set of Shielded Instance options. See [Images using supported Shielded VM features](https://cloud.google.com/compute/docs/instances/modifying-shielded-vm). # Optional. Immutable. Runtime Shielded VM spec.
+    "enableSecureBoot": True or False, # Defines whether the instance has [Secure Boot](https://cloud.google.com/compute/shielded-vm/docs/shielded-vm#secure-boot) enabled. Secure Boot helps ensure that the system only runs authentic software by verifying the digital signature of all boot components, and halting the boot process if signature verification fails.
+  },
+  "updateTime": "A String", # Output only. Timestamp when this NotebookRuntimeTemplate was most recently updated.
+}
+
+  updateMask: string, Required. The update mask applies to the resource. For the `FieldMask` definition, see google.protobuf.FieldMask. Input format: `{paths: "${updated_field}"}` Updatable fields: * `encryption_spec.kms_key_name`
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # A template that specifies runtime configurations such as machine type, runtime version, network configurations, etc. Multiple runtimes can be created from a runtime template.
+  "createTime": "A String", # Output only. Timestamp when this NotebookRuntimeTemplate was created.
+  "dataPersistentDiskSpec": { # Represents the spec of persistent disk options. # Optional. The specification of persistent disk attached to the runtime as data disk storage.
+    "diskSizeGb": "A String", # Size in GB of the disk (default is 100GB).
+    "diskType": "A String", # Type of the disk (default is "pd-standard"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) "pd-standard" (Persistent Disk Hard Disk Drive) "pd-balanced" (Balanced Persistent Disk) "pd-extreme" (Extreme Persistent Disk)
+  },
+  "description": "A String", # The description of the NotebookRuntimeTemplate.
+  "displayName": "A String", # Required. The display name of the NotebookRuntimeTemplate. The name can be up to 128 characters long and can consist of any UTF-8 characters.
+  "encryptionSpec": { # Represents a customer-managed encryption key spec that can be applied to a top-level resource. # Customer-managed encryption key spec for the notebook runtime.
+    "kmsKeyName": "A String", # Required. The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: `projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key`. The key needs to be in the same region as where the compute resource is created.
+  },
+  "etag": "A String", # Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
+  "eucConfig": { # The euc configuration of NotebookRuntimeTemplate. # EUC configuration of the NotebookRuntimeTemplate.
+    "bypassActasCheck": True or False, # Output only. Whether ActAs check is bypassed for service account attached to the VM. If false, we need ActAs check for the default Compute Engine Service account. When a Runtime is created, a VM is allocated using Default Compute Engine Service Account. Any user requesting to use this Runtime requires Service Account User (ActAs) permission over this SA. If true, Runtime owner is using EUC and does not require the above permission as VM no longer use default Compute Engine SA, but a P4SA.
+    "eucDisabled": True or False, # Input only. Whether EUC is disabled in this NotebookRuntimeTemplate. In proto3, the default value of a boolean is false. In this way, by default EUC will be enabled for NotebookRuntimeTemplate.
+  },
+  "idleShutdownConfig": { # The idle shutdown configuration of NotebookRuntimeTemplate, which contains the idle_timeout as required field. # The idle shutdown configuration of NotebookRuntimeTemplate. This config will only be set when idle shutdown is enabled.
+    "idleShutdownDisabled": True or False, # Whether Idle Shutdown is disabled in this NotebookRuntimeTemplate.
+    "idleTimeout": "A String", # Required. Duration is accurate to the second. In Notebook, Idle Timeout is accurate to minute so the range of idle_timeout (second) is: 10 * 60 ~ 1440 * 60.
+  },
+  "isDefault": True or False, # Output only. The default template to use if not specified.
+  "labels": { # The labels with user-defined metadata to organize the NotebookRuntimeTemplates. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
+    "a_key": "A String",
+  },
+  "machineSpec": { # Specification of a single machine. # Optional. Immutable. The specification of a single machine for the template.
+    "acceleratorCount": 42, # The number of accelerators to attach to the machine.
+    "acceleratorType": "A String", # Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
+    "machineType": "A String", # Immutable. The type of the machine. See the [list of machine types supported for prediction](https://cloud.google.com/vertex-ai/docs/predictions/configure-compute#machine-types) See the [list of machine types supported for custom training](https://cloud.google.com/vertex-ai/docs/training/configure-compute#machine-types). For DeployedModel this field is optional, and the default value is `n1-standard-2`. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
+    "tpuTopology": "A String", # Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
+  },
+  "name": "A String", # The resource name of the NotebookRuntimeTemplate.
+  "networkSpec": { # Network spec. # Optional. Network spec.
+    "enableInternetAccess": True or False, # Whether to enable public internet access. Default false.
+    "network": "A String", # The full name of the Google Compute Engine [network](https://cloud.google.com//compute/docs/networks-and-firewalls#networks)
+    "subnetwork": "A String", # The name of the subnet that this instance is in. Format: `projects/{project_id_or_number}/regions/{region}/subnetworks/{subnetwork_id}`
+  },
+  "networkTags": [ # Optional. The Compute Engine tags to add to runtime (see [Tagging instances](https://cloud.google.com/vpc/docs/add-remove-network-tags)).
+    "A String",
+  ],
+  "notebookRuntimeType": "A String", # Optional. Immutable. The type of the notebook runtime template.
+  "serviceAccount": "A String", # The service account that the runtime workload runs as. You can use any service account within the same project, but you must have the service account user permission to use the instance. If not specified, the [Compute Engine default service account](https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
+  "shieldedVmConfig": { # A set of Shielded Instance options. See [Images using supported Shielded VM features](https://cloud.google.com/compute/docs/instances/modifying-shielded-vm). # Optional. Immutable. Runtime Shielded VM spec.
+    "enableSecureBoot": True or False, # Defines whether the instance has [Secure Boot](https://cloud.google.com/compute/shielded-vm/docs/shielded-vm#secure-boot) enabled. Secure Boot helps ensure that the system only runs authentic software by verifying the digital signature of all boot components, and halting the boot process if signature verification fails.
+  },
+  "updateTime": "A String", # Output only. Timestamp when this NotebookRuntimeTemplate was most recently updated.
+}
+
+
setIamPolicy(resource, body=None, x__xgafv=None)
Sets the access control policy on the specified resource. Replaces any existing policy. Can return `NOT_FOUND`, `INVALID_ARGUMENT`, and `PERMISSION_DENIED` errors.
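A minimal sketch of calling the new patch method with google-api-python-client, assuming application-default credentials; the project, region, template ID, and KMS key name below are illustrative, and per the updateMask documentation above only `encryption_spec.kms_key_name` is updatable:

    from googleapiclient import discovery

    # Build the aiplatform v1 client (assumes application-default credentials).
    service = discovery.build("aiplatform", "v1")

    # Hypothetical resource names for illustration only.
    name = "projects/my-project/locations/us-central1/notebookRuntimeTemplates/my-template"
    body = {
        "encryptionSpec": {
            "kmsKeyName": "projects/my-project/locations/us-central1/keyRings/my-kr/cryptoKeys/my-key",
        },
    }

    # Only `encryption_spec.kms_key_name` is listed as updatable.
    response = (
        service.projects()
        .locations()
        .notebookRuntimeTemplates()
        .patch(name=name, body=body, updateMask="encryption_spec.kms_key_name")
        .execute()
    )
    print(response["name"], response.get("encryptionSpec"))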
diff --git a/docs/dyn/aiplatform_v1.projects.locations.notebookRuntimes.html b/docs/dyn/aiplatform_v1.projects.locations.notebookRuntimes.html
index ad194aae5d3..8c03d6565fe 100644
--- a/docs/dyn/aiplatform_v1.projects.locations.notebookRuntimes.html
+++ b/docs/dyn/aiplatform_v1.projects.locations.notebookRuntimes.html
@@ -113,8 +113,15 @@ 

"createTime": "A String", # Output only. Timestamp when this NotebookRuntime was created. "description": "A String", # The description of the NotebookRuntime. "displayName": "A String", # Required. The display name of the NotebookRuntime. The name can be up to 128 characters long and can consist of any UTF-8 characters. + "encryptionSpec": { # Represents a customer-managed encryption key spec that can be applied to a top-level resource. # Output only. Customer-managed encryption key spec for the notebook runtime. + "kmsKeyName": "A String", # Required. The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: `projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key`. The key needs to be in the same region as where the compute resource is created. + }, "expirationTime": "A String", # Output only. Timestamp when this NotebookRuntime will be expired: 1. System Predefined NotebookRuntime: 24 hours after creation. After expiration, system predifined runtime will be deleted. 2. User created NotebookRuntime: 6 months after last upgrade. After expiration, user created runtime will be stopped and allowed for upgrade. "healthState": "A String", # Output only. The health state of the NotebookRuntime. + "idleShutdownConfig": { # The idle shutdown configuration of NotebookRuntimeTemplate, which contains the idle_timeout as required field. # Output only. The idle shutdown configuration of the notebook runtime. + "idleShutdownDisabled": True or False, # Whether Idle Shutdown is disabled in this NotebookRuntimeTemplate. + "idleTimeout": "A String", # Required. Duration is accurate to the second. In Notebook, Idle Timeout is accurate to minute so the range of idle_timeout (second) is: 10 * 60 ~ 1440 * 60. + }, "isUpgradable": True or False, # Output only. Whether NotebookRuntime is upgradable. "labels": { # The labels with user-defined metadata to organize your NotebookRuntime. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. No more than 64 user labels can be associated with one NotebookRuntime (System labels are excluded). See https://goo.gl/xmQnxf for more information and examples of labels. System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable. Following system labels exist for NotebookRuntime: * "aiplatform.googleapis.com/notebook_runtime_gce_instance_id": output only, its value is the Compute Engine instance id. * "aiplatform.googleapis.com/colab_enterprise_entry_service": its value is either "bigquery" or "vertex"; if absent, it should be "vertex". This is to describe the entry service, either BigQuery or Vertex. "a_key": "A String", @@ -128,13 +135,6 @@

}, "notebookRuntimeType": "A String", # Output only. The type of the notebook runtime. "proxyUri": "A String", # Output only. The proxy endpoint used to access the NotebookRuntime. - "reservationAffinity": { # Notebook Reservation Affinity for consuming Zonal reservation. # Output only. Reservation Affinity of the notebook runtime. - "consumeReservationType": "A String", # Required. Specifies the type of reservation from which this instance can consume resources: RESERVATION_ANY (default), RESERVATION_SPECIFIC, or RESERVATION_NONE. See Consuming reserved instances for examples. - "key": "A String", # Optional. Corresponds to the label key of a reservation resource. To target a RESERVATION_SPECIFIC by name, use compute.googleapis.com/reservation-name as the key and specify the name of your reservation as its value. - "values": [ # Optional. Corresponds to the label values of a reservation resource. This must be the full path name of Reservation. - "A String", - ], - }, "runtimeState": "A String", # Output only. The runtime (instance) state of the NotebookRuntime. "runtimeUser": "A String", # Required. The user email of the NotebookRuntime. "satisfiesPzi": True or False, # Output only. Reserved for future use. @@ -234,8 +234,15 @@

"createTime": "A String", # Output only. Timestamp when this NotebookRuntime was created. "description": "A String", # The description of the NotebookRuntime. "displayName": "A String", # Required. The display name of the NotebookRuntime. The name can be up to 128 characters long and can consist of any UTF-8 characters. + "encryptionSpec": { # Represents a customer-managed encryption key spec that can be applied to a top-level resource. # Output only. Customer-managed encryption key spec for the notebook runtime. + "kmsKeyName": "A String", # Required. The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: `projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key`. The key needs to be in the same region as where the compute resource is created. + }, "expirationTime": "A String", # Output only. Timestamp when this NotebookRuntime will be expired: 1. System Predefined NotebookRuntime: 24 hours after creation. After expiration, system predifined runtime will be deleted. 2. User created NotebookRuntime: 6 months after last upgrade. After expiration, user created runtime will be stopped and allowed for upgrade. "healthState": "A String", # Output only. The health state of the NotebookRuntime. + "idleShutdownConfig": { # The idle shutdown configuration of NotebookRuntimeTemplate, which contains the idle_timeout as required field. # Output only. The idle shutdown configuration of the notebook runtime. + "idleShutdownDisabled": True or False, # Whether Idle Shutdown is disabled in this NotebookRuntimeTemplate. + "idleTimeout": "A String", # Required. Duration is accurate to the second. In Notebook, Idle Timeout is accurate to minute so the range of idle_timeout (second) is: 10 * 60 ~ 1440 * 60. + }, "isUpgradable": True or False, # Output only. Whether NotebookRuntime is upgradable. "labels": { # The labels with user-defined metadata to organize your NotebookRuntime. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. No more than 64 user labels can be associated with one NotebookRuntime (System labels are excluded). See https://goo.gl/xmQnxf for more information and examples of labels. System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable. Following system labels exist for NotebookRuntime: * "aiplatform.googleapis.com/notebook_runtime_gce_instance_id": output only, its value is the Compute Engine instance id. * "aiplatform.googleapis.com/colab_enterprise_entry_service": its value is either "bigquery" or "vertex"; if absent, it should be "vertex". This is to describe the entry service, either BigQuery or Vertex. "a_key": "A String", @@ -249,13 +256,6 @@

}, "notebookRuntimeType": "A String", # Output only. The type of the notebook runtime. "proxyUri": "A String", # Output only. The proxy endpoint used to access the NotebookRuntime. - "reservationAffinity": { # Notebook Reservation Affinity for consuming Zonal reservation. # Output only. Reservation Affinity of the notebook runtime. - "consumeReservationType": "A String", # Required. Specifies the type of reservation from which this instance can consume resources: RESERVATION_ANY (default), RESERVATION_SPECIFIC, or RESERVATION_NONE. See Consuming reserved instances for examples. - "key": "A String", # Optional. Corresponds to the label key of a reservation resource. To target a RESERVATION_SPECIFIC by name, use compute.googleapis.com/reservation-name as the key and specify the name of your reservation as its value. - "values": [ # Optional. Corresponds to the label values of a reservation resource. This must be the full path name of Reservation. - "A String", - ], - }, "runtimeState": "A String", # Output only. The runtime (instance) state of the NotebookRuntime. "runtimeUser": "A String", # Required. The user email of the NotebookRuntime. "satisfiesPzi": True or False, # Output only. Reserved for future use. @@ -292,8 +292,15 @@

"createTime": "A String", # Output only. Timestamp when this NotebookRuntime was created. "description": "A String", # The description of the NotebookRuntime. "displayName": "A String", # Required. The display name of the NotebookRuntime. The name can be up to 128 characters long and can consist of any UTF-8 characters. + "encryptionSpec": { # Represents a customer-managed encryption key spec that can be applied to a top-level resource. # Output only. Customer-managed encryption key spec for the notebook runtime. + "kmsKeyName": "A String", # Required. The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: `projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key`. The key needs to be in the same region as where the compute resource is created. + }, "expirationTime": "A String", # Output only. Timestamp when this NotebookRuntime will be expired: 1. System Predefined NotebookRuntime: 24 hours after creation. After expiration, system predifined runtime will be deleted. 2. User created NotebookRuntime: 6 months after last upgrade. After expiration, user created runtime will be stopped and allowed for upgrade. "healthState": "A String", # Output only. The health state of the NotebookRuntime. + "idleShutdownConfig": { # The idle shutdown configuration of NotebookRuntimeTemplate, which contains the idle_timeout as required field. # Output only. The idle shutdown configuration of the notebook runtime. + "idleShutdownDisabled": True or False, # Whether Idle Shutdown is disabled in this NotebookRuntimeTemplate. + "idleTimeout": "A String", # Required. Duration is accurate to the second. In Notebook, Idle Timeout is accurate to minute so the range of idle_timeout (second) is: 10 * 60 ~ 1440 * 60. + }, "isUpgradable": True or False, # Output only. Whether NotebookRuntime is upgradable. "labels": { # The labels with user-defined metadata to organize your NotebookRuntime. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. No more than 64 user labels can be associated with one NotebookRuntime (System labels are excluded). See https://goo.gl/xmQnxf for more information and examples of labels. System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable. Following system labels exist for NotebookRuntime: * "aiplatform.googleapis.com/notebook_runtime_gce_instance_id": output only, its value is the Compute Engine instance id. * "aiplatform.googleapis.com/colab_enterprise_entry_service": its value is either "bigquery" or "vertex"; if absent, it should be "vertex". This is to describe the entry service, either BigQuery or Vertex. "a_key": "A String", @@ -307,13 +314,6 @@

}, "notebookRuntimeType": "A String", # Output only. The type of the notebook runtime. "proxyUri": "A String", # Output only. The proxy endpoint used to access the NotebookRuntime. - "reservationAffinity": { # Notebook Reservation Affinity for consuming Zonal reservation. # Output only. Reservation Affinity of the notebook runtime. - "consumeReservationType": "A String", # Required. Specifies the type of reservation from which this instance can consume resources: RESERVATION_ANY (default), RESERVATION_SPECIFIC, or RESERVATION_NONE. See Consuming reserved instances for examples. - "key": "A String", # Optional. Corresponds to the label key of a reservation resource. To target a RESERVATION_SPECIFIC by name, use compute.googleapis.com/reservation-name as the key and specify the name of your reservation as its value. - "values": [ # Optional. Corresponds to the label values of a reservation resource. This must be the full path name of Reservation. - "A String", - ], - }, "runtimeState": "A String", # Output only. The runtime (instance) state of the NotebookRuntime. "runtimeUser": "A String", # Required. The user email of the NotebookRuntime. "satisfiesPzi": True or False, # Output only. Reserved for future use. diff --git a/docs/dyn/aiplatform_v1.projects.locations.persistentResources.html b/docs/dyn/aiplatform_v1.projects.locations.persistentResources.html index 6af3e0a394d..27a0ec6128c 100644 --- a/docs/dyn/aiplatform_v1.projects.locations.persistentResources.html +++ b/docs/dyn/aiplatform_v1.projects.locations.persistentResources.html @@ -163,9 +163,20 @@

}, ], "resourceRuntime": { # Persistent Cluster runtime information as output # Output only. Runtime information of the Persistent Resource. + "accessUris": { # Output only. URIs for user to connect to the Cluster. Example: { "RAY_HEAD_NODE_INTERNAL_IP": "head-node-IP:10001" "RAY_DASHBOARD_URI": "ray-dashboard-address:8888" } + "a_key": "A String", + }, }, "resourceRuntimeSpec": { # Configuration for the runtime on a PersistentResource instance, including but not limited to: * Service accounts used to run the workloads. * Whether to make it a dedicated Ray Cluster. # Optional. Persistent Resource runtime spec. For example, used for Ray cluster configuration. "raySpec": { # Configuration information for the Ray cluster. For experimental launch, Ray cluster creation and Persistent cluster creation are 1:1 mapping: We will provision all the nodes within the Persistent cluster as Ray nodes. # Optional. Ray cluster configuration. Required when creating a dedicated RayCluster on the PersistentResource. + "headNodeResourcePoolId": "A String", # Optional. This will be used to indicate which resource pool will serve as the Ray head node(the first node within that pool). Will use the machine from the first workerpool as the head node by default if this field isn't set. + "imageUri": "A String", # Optional. Default image for user to choose a preferred ML framework (for example, TensorFlow or Pytorch) by choosing from [Vertex prebuilt images](https://cloud.google.com/vertex-ai/docs/training/pre-built-containers). Either this or the resource_pool_images is required. Use this field if you need all the resource pools to have the same Ray image. Otherwise, use the {@code resource_pool_images} field. + "rayMetricSpec": { # Configuration for the Ray metrics. # Optional. Ray metrics configurations. + "disabled": True or False, # Optional. Flag to disable the Ray metrics collection. + }, + "resourcePoolImages": { # Optional. Required if image_uri isn't set. A map of resource_pool_id to prebuild Ray image if user need to use different images for different head/worker pools. This map needs to cover all the resource pool ids. Example: { "ray_head_node_pool": "head image" "ray_worker_node_pool1": "worker image" "ray_worker_node_pool2": "another worker image" } + "a_key": "A String", + }, }, "serviceAccountSpec": { # Configuration for the use of custom service account to run the workloads. # Optional. Configure the use of workload identity on the PersistentResource "enableCustomServiceAccount": True or False, # Required. If true, custom user-managed service account is enforced to run any workloads (for example, Vertex Jobs) on the resource. Otherwise, uses the [Vertex AI Custom Code Service Agent](https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents). @@ -301,9 +312,20 @@

}, ], "resourceRuntime": { # Persistent Cluster runtime information as output # Output only. Runtime information of the Persistent Resource. + "accessUris": { # Output only. URIs for user to connect to the Cluster. Example: { "RAY_HEAD_NODE_INTERNAL_IP": "head-node-IP:10001" "RAY_DASHBOARD_URI": "ray-dashboard-address:8888" } + "a_key": "A String", + }, }, "resourceRuntimeSpec": { # Configuration for the runtime on a PersistentResource instance, including but not limited to: * Service accounts used to run the workloads. * Whether to make it a dedicated Ray Cluster. # Optional. Persistent Resource runtime spec. For example, used for Ray cluster configuration. "raySpec": { # Configuration information for the Ray cluster. For experimental launch, Ray cluster creation and Persistent cluster creation are 1:1 mapping: We will provision all the nodes within the Persistent cluster as Ray nodes. # Optional. Ray cluster configuration. Required when creating a dedicated RayCluster on the PersistentResource. + "headNodeResourcePoolId": "A String", # Optional. This will be used to indicate which resource pool will serve as the Ray head node(the first node within that pool). Will use the machine from the first workerpool as the head node by default if this field isn't set. + "imageUri": "A String", # Optional. Default image for user to choose a preferred ML framework (for example, TensorFlow or Pytorch) by choosing from [Vertex prebuilt images](https://cloud.google.com/vertex-ai/docs/training/pre-built-containers). Either this or the resource_pool_images is required. Use this field if you need all the resource pools to have the same Ray image. Otherwise, use the {@code resource_pool_images} field. + "rayMetricSpec": { # Configuration for the Ray metrics. # Optional. Ray metrics configurations. + "disabled": True or False, # Optional. Flag to disable the Ray metrics collection. + }, + "resourcePoolImages": { # Optional. Required if image_uri isn't set. A map of resource_pool_id to prebuild Ray image if user need to use different images for different head/worker pools. This map needs to cover all the resource pool ids. Example: { "ray_head_node_pool": "head image" "ray_worker_node_pool1": "worker image" "ray_worker_node_pool2": "another worker image" } + "a_key": "A String", + }, }, "serviceAccountSpec": { # Configuration for the use of custom service account to run the workloads. # Optional. Configure the use of workload identity on the PersistentResource "enableCustomServiceAccount": True or False, # Required. If true, custom user-managed service account is enforced to run any workloads (for example, Vertex Jobs) on the resource. Otherwise, uses the [Vertex AI Custom Code Service Agent](https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents). @@ -380,9 +402,20 @@

}, ], "resourceRuntime": { # Persistent Cluster runtime information as output # Output only. Runtime information of the Persistent Resource. + "accessUris": { # Output only. URIs for user to connect to the Cluster. Example: { "RAY_HEAD_NODE_INTERNAL_IP": "head-node-IP:10001" "RAY_DASHBOARD_URI": "ray-dashboard-address:8888" } + "a_key": "A String", + }, }, "resourceRuntimeSpec": { # Configuration for the runtime on a PersistentResource instance, including but not limited to: * Service accounts used to run the workloads. * Whether to make it a dedicated Ray Cluster. # Optional. Persistent Resource runtime spec. For example, used for Ray cluster configuration. "raySpec": { # Configuration information for the Ray cluster. For experimental launch, Ray cluster creation and Persistent cluster creation are 1:1 mapping: We will provision all the nodes within the Persistent cluster as Ray nodes. # Optional. Ray cluster configuration. Required when creating a dedicated RayCluster on the PersistentResource. + "headNodeResourcePoolId": "A String", # Optional. This will be used to indicate which resource pool will serve as the Ray head node(the first node within that pool). Will use the machine from the first workerpool as the head node by default if this field isn't set. + "imageUri": "A String", # Optional. Default image for user to choose a preferred ML framework (for example, TensorFlow or Pytorch) by choosing from [Vertex prebuilt images](https://cloud.google.com/vertex-ai/docs/training/pre-built-containers). Either this or the resource_pool_images is required. Use this field if you need all the resource pools to have the same Ray image. Otherwise, use the {@code resource_pool_images} field. + "rayMetricSpec": { # Configuration for the Ray metrics. # Optional. Ray metrics configurations. + "disabled": True or False, # Optional. Flag to disable the Ray metrics collection. + }, + "resourcePoolImages": { # Optional. Required if image_uri isn't set. A map of resource_pool_id to prebuild Ray image if user need to use different images for different head/worker pools. This map needs to cover all the resource pool ids. Example: { "ray_head_node_pool": "head image" "ray_worker_node_pool1": "worker image" "ray_worker_node_pool2": "another worker image" } + "a_key": "A String", + }, }, "serviceAccountSpec": { # Configuration for the use of custom service account to run the workloads. # Optional. Configure the use of workload identity on the PersistentResource "enableCustomServiceAccount": True or False, # Required. If true, custom user-managed service account is enforced to run any workloads (for example, Vertex Jobs) on the resource. Otherwise, uses the [Vertex AI Custom Code Service Agent](https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents). @@ -465,9 +498,20 @@

}, ], "resourceRuntime": { # Persistent Cluster runtime information as output # Output only. Runtime information of the Persistent Resource. + "accessUris": { # Output only. URIs for user to connect to the Cluster. Example: { "RAY_HEAD_NODE_INTERNAL_IP": "head-node-IP:10001" "RAY_DASHBOARD_URI": "ray-dashboard-address:8888" } + "a_key": "A String", + }, }, "resourceRuntimeSpec": { # Configuration for the runtime on a PersistentResource instance, including but not limited to: * Service accounts used to run the workloads. * Whether to make it a dedicated Ray Cluster. # Optional. Persistent Resource runtime spec. For example, used for Ray cluster configuration. "raySpec": { # Configuration information for the Ray cluster. For experimental launch, Ray cluster creation and Persistent cluster creation are 1:1 mapping: We will provision all the nodes within the Persistent cluster as Ray nodes. # Optional. Ray cluster configuration. Required when creating a dedicated RayCluster on the PersistentResource. + "headNodeResourcePoolId": "A String", # Optional. This will be used to indicate which resource pool will serve as the Ray head node(the first node within that pool). Will use the machine from the first workerpool as the head node by default if this field isn't set. + "imageUri": "A String", # Optional. Default image for user to choose a preferred ML framework (for example, TensorFlow or Pytorch) by choosing from [Vertex prebuilt images](https://cloud.google.com/vertex-ai/docs/training/pre-built-containers). Either this or the resource_pool_images is required. Use this field if you need all the resource pools to have the same Ray image. Otherwise, use the {@code resource_pool_images} field. + "rayMetricSpec": { # Configuration for the Ray metrics. # Optional. Ray metrics configurations. + "disabled": True or False, # Optional. Flag to disable the Ray metrics collection. + }, + "resourcePoolImages": { # Optional. Required if image_uri isn't set. A map of resource_pool_id to prebuild Ray image if user need to use different images for different head/worker pools. This map needs to cover all the resource pool ids. Example: { "ray_head_node_pool": "head image" "ray_worker_node_pool1": "worker image" "ray_worker_node_pool2": "another worker image" } + "a_key": "A String", + }, }, "serviceAccountSpec": { # Configuration for the use of custom service account to run the workloads. # Optional. Configure the use of workload identity on the PersistentResource "enableCustomServiceAccount": True or False, # Required. If true, custom user-managed service account is enforced to run any workloads (for example, Vertex Jobs) on the resource. Otherwise, uses the [Vertex AI Custom Code Service Agent](https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents). diff --git a/docs/dyn/aiplatform_v1.projects.locations.publishers.models.html b/docs/dyn/aiplatform_v1.projects.locations.publishers.models.html index 4274fe9f508..4a50a55476b 100644 --- a/docs/dyn/aiplatform_v1.projects.locations.publishers.models.html +++ b/docs/dyn/aiplatform_v1.projects.locations.publishers.models.html @@ -258,7 +258,34 @@

"maxOutputTokens": 42, # Optional. The maximum number of output tokens to generate per message. "presencePenalty": 3.14, # Optional. Positive penalties. "responseMimeType": "A String", # Optional. Output response mimetype of the generated candidate text. Supported mimetype: - `text/plain`: (default) Text output. - `application/json`: JSON response in the candidates. The model needs to be prompted to output the appropriate response type, otherwise the behavior is undefined. This is a preview feature. - "responseStyle": "A String", # Optional. Control Three levels of creativity in the model output. Default: RESPONSE_STYLE_BALANCED + "responseSchema": { # Schema is used to define the format of input/output data. Represents a select subset of an [OpenAPI 3.0 schema object](https://spec.openapis.org/oas/v3.0.3#schema). More fields may be added in the future as needed. # Optional. The `Schema` object allows the definition of input and output data types. These types can be objects, but also primitives and arrays. Represents a select subset of an [OpenAPI 3.0 schema object](https://spec.openapis.org/oas/v3.0.3#schema). If set, a compatible response_mime_type must also be set. Compatible mimetypes: `application/json`: Schema for JSON response. + "default": "", # Optional. Default value of the data. + "description": "A String", # Optional. The description of the data. + "enum": [ # Optional. Possible values of the element of Type.STRING with enum format. For example we can define an Enum Direction as : {type:STRING, format:enum, enum:["EAST", NORTH", "SOUTH", "WEST"]} + "A String", + ], + "example": "", # Optional. Example of the object. Will only populated when the object is the root. + "format": "A String", # Optional. The format of the data. Supported formats: for NUMBER type: "float", "double" for INTEGER type: "int32", "int64" for STRING type: "email", "byte", etc + "items": # Object with schema name: GoogleCloudAiplatformV1Schema # Optional. SCHEMA FIELDS FOR TYPE ARRAY Schema of the elements of Type.ARRAY. + "maxItems": "A String", # Optional. Maximum number of the elements for Type.ARRAY. + "maxLength": "A String", # Optional. Maximum length of the Type.STRING + "maxProperties": "A String", # Optional. Maximum number of the properties for Type.OBJECT. + "maximum": 3.14, # Optional. Maximum value of the Type.INTEGER and Type.NUMBER + "minItems": "A String", # Optional. Minimum number of the elements for Type.ARRAY. + "minLength": "A String", # Optional. SCHEMA FIELDS FOR TYPE STRING Minimum length of the Type.STRING + "minProperties": "A String", # Optional. Minimum number of the properties for Type.OBJECT. + "minimum": 3.14, # Optional. SCHEMA FIELDS FOR TYPE INTEGER and NUMBER Minimum value of the Type.INTEGER and Type.NUMBER + "nullable": True or False, # Optional. Indicates if the value may be null. + "pattern": "A String", # Optional. Pattern of the Type.STRING to restrict a string to a regular expression. + "properties": { # Optional. SCHEMA FIELDS FOR TYPE OBJECT Properties of Type.OBJECT. + "a_key": # Object with schema name: GoogleCloudAiplatformV1Schema + }, + "required": [ # Optional. Required properties of Type.OBJECT. + "A String", + ], + "title": "A String", # Optional. The title of the Schema. + "type": "A String", # Optional. The type of the data. + }, "stopSequences": [ # Optional. Stop sequences. "A String", ], @@ -305,6 +332,14 @@

], "role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. Useful to set for multi-turn conversations, otherwise can be left blank or unset. }, + "toolConfig": { # Tool config. This config is shared for all tools provided in the request. # Optional. Tool config. This config is shared for all tools provided in the request. + "functionCallingConfig": { # Function calling config. # Optional. Function calling config. + "allowedFunctionNames": [ # Optional. Function names to call. Only set when the Mode is ANY. Function names should match [FunctionDeclaration.name]. With mode set to ANY, model will predict a function call from the set of function names provided. + "A String", + ], + "mode": "A String", # Optional. Function calling mode. + }, + }, "tools": [ # Optional. A list of `Tools` the model may use to generate the next response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of the model. { # Tool details that the model may use to generate response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g FunctionDeclaration, Retrieval or GoogleSearchRetrieval). "functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 64 function declarations can be provided. @@ -341,6 +376,8 @@

        },
      },
    ],
+    "googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. GoogleSearchRetrieval tool type. Specialized retrieval tool that is powered by Google search.
+    },
    "retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
      "disableAttribution": True or False, # Optional. Disable using the result from this tool in detecting grounding attribution. This does not affect how the result is given to the model for generation.
      "vertexAiSearch": { # Retrieve from Vertex AI Search datastore for grounding. See https://cloud.google.com/vertex-ai-search-and-conversation # Set to use data source powered by Vertex AI Search.
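A sketch of a generateContent request that uses the new responseSchema field to force a JSON response; the model path is an illustrative assumption, and the uppercase Type values (ARRAY, STRING) are taken from the v1 Schema Type enum rather than from this hunk:

    from googleapiclient import discovery

    service = discovery.build("aiplatform", "v1")

    body = {
        "contents": [{"role": "user", "parts": [{"text": "List three colors."}]}],
        "generationConfig": {
            # responseSchema requires a compatible responseMimeType, per the docs above.
            "responseMimeType": "application/json",
            "responseSchema": {
                "type": "ARRAY",
                "items": {"type": "STRING"},
            },
        },
    }
    response = service.projects().locations().publishers().models().generateContent(
        model="projects/my-project/locations/us-central1/publishers/google/models/gemini-1.5-pro",
        body=body,
    ).execute()
    print(response["candidates"][0]["content"]["parts"][0]["text"])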

"maxOutputTokens": 42, # Optional. The maximum number of output tokens to generate per message. "presencePenalty": 3.14, # Optional. Positive penalties. "responseMimeType": "A String", # Optional. Output response mimetype of the generated candidate text. Supported mimetype: - `text/plain`: (default) Text output. - `application/json`: JSON response in the candidates. The model needs to be prompted to output the appropriate response type, otherwise the behavior is undefined. This is a preview feature. - "responseStyle": "A String", # Optional. Control Three levels of creativity in the model output. Default: RESPONSE_STYLE_BALANCED + "responseSchema": { # Schema is used to define the format of input/output data. Represents a select subset of an [OpenAPI 3.0 schema object](https://spec.openapis.org/oas/v3.0.3#schema). More fields may be added in the future as needed. # Optional. The `Schema` object allows the definition of input and output data types. These types can be objects, but also primitives and arrays. Represents a select subset of an [OpenAPI 3.0 schema object](https://spec.openapis.org/oas/v3.0.3#schema). If set, a compatible response_mime_type must also be set. Compatible mimetypes: `application/json`: Schema for JSON response. + "default": "", # Optional. Default value of the data. + "description": "A String", # Optional. The description of the data. + "enum": [ # Optional. Possible values of the element of Type.STRING with enum format. For example we can define an Enum Direction as : {type:STRING, format:enum, enum:["EAST", NORTH", "SOUTH", "WEST"]} + "A String", + ], + "example": "", # Optional. Example of the object. Will only populated when the object is the root. + "format": "A String", # Optional. The format of the data. Supported formats: for NUMBER type: "float", "double" for INTEGER type: "int32", "int64" for STRING type: "email", "byte", etc + "items": # Object with schema name: GoogleCloudAiplatformV1Schema # Optional. SCHEMA FIELDS FOR TYPE ARRAY Schema of the elements of Type.ARRAY. + "maxItems": "A String", # Optional. Maximum number of the elements for Type.ARRAY. + "maxLength": "A String", # Optional. Maximum length of the Type.STRING + "maxProperties": "A String", # Optional. Maximum number of the properties for Type.OBJECT. + "maximum": 3.14, # Optional. Maximum value of the Type.INTEGER and Type.NUMBER + "minItems": "A String", # Optional. Minimum number of the elements for Type.ARRAY. + "minLength": "A String", # Optional. SCHEMA FIELDS FOR TYPE STRING Minimum length of the Type.STRING + "minProperties": "A String", # Optional. Minimum number of the properties for Type.OBJECT. + "minimum": 3.14, # Optional. SCHEMA FIELDS FOR TYPE INTEGER and NUMBER Minimum value of the Type.INTEGER and Type.NUMBER + "nullable": True or False, # Optional. Indicates if the value may be null. + "pattern": "A String", # Optional. Pattern of the Type.STRING to restrict a string to a regular expression. + "properties": { # Optional. SCHEMA FIELDS FOR TYPE OBJECT Properties of Type.OBJECT. + "a_key": # Object with schema name: GoogleCloudAiplatformV1Schema + }, + "required": [ # Optional. Required properties of Type.OBJECT. + "A String", + ], + "title": "A String", # Optional. The title of the Schema. + "type": "A String", # Optional. The type of the data. + }, "stopSequences": [ # Optional. Stop sequences. "A String", ], @@ -817,6 +881,14 @@

], "role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. Useful to set for multi-turn conversations, otherwise can be left blank or unset. }, + "toolConfig": { # Tool config. This config is shared for all tools provided in the request. # Optional. Tool config. This config is shared for all tools provided in the request. + "functionCallingConfig": { # Function calling config. # Optional. Function calling config. + "allowedFunctionNames": [ # Optional. Function names to call. Only set when the Mode is ANY. Function names should match [FunctionDeclaration.name]. With mode set to ANY, model will predict a function call from the set of function names provided. + "A String", + ], + "mode": "A String", # Optional. Function calling mode. + }, + }, "tools": [ # Optional. A list of `Tools` the model may use to generate the next response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of the model. { # Tool details that the model may use to generate response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g FunctionDeclaration, Retrieval or GoogleSearchRetrieval). "functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 64 function declarations can be provided. @@ -853,6 +925,8 @@

@@ -853,6 +925,8 @@
        },
      },
    ],
+    "googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. GoogleSearchRetrieval tool type. Specialized retrieval tool that is powered by Google search.
+    },
    "retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
      "disableAttribution": True or False, # Optional. Disable using the result from this tool in detecting grounding attribution. This does not affect how the result is given to the model for generation.
      "vertexAiSearch": { # Retrieve from Vertex AI Search datastore for grounding. See https://cloud.google.com/vertex-ai-search-and-conversation # Set to use data source powered by Vertex AI Search.
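The googleSearchRetrieval tool is documented above with no fields, so an empty object should suffice to enable Google Search grounding; a minimal request-body sketch:

    # Enable Google Search grounding with the new googleSearchRetrieval tool.
    body = {
        "contents": [{"role": "user", "parts": [{"text": "Who won the most recent UEFA Euro?"}]}],
        "tools": [{"googleSearchRetrieval": {}}],
    }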

Method Details

}, ], "healthProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes readiness probe. - "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take. + "exec": { # ExecAction specifies a command to execute. # ExecAction probes the health of a container by executing a command. "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. "A String", ], @@ -245,7 +245,7 @@

Method Details

"predictRoute": "A String", # Immutable. HTTP path on the container to send prediction requests to. Vertex AI forwards requests sent using projects.locations.endpoints.predict to this path on the container's IP address and port. Vertex AI then returns the container's response in the API response. For example, if you set this field to `/foo`, then when Vertex AI receives a prediction request, it forwards the request body in a POST request to the `/foo` path on the port of your container specified by the first value of this `ModelContainerSpec`'s ports field. If you don't specify this field, it defaults to the following value when you deploy this Model to an Endpoint: /v1/endpoints/ENDPOINT/deployedModels/DEPLOYED_MODEL:predict The placeholders in this value are replaced as follows: * ENDPOINT: The last segment (following `endpoints/`)of the Endpoint.name][] field of the Endpoint where this Model has been deployed. (Vertex AI makes this value available to your container code as the [`AIP_ENDPOINT_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) * DEPLOYED_MODEL: DeployedModel.id of the `DeployedModel`. (Vertex AI makes this value available to your container code as the [`AIP_DEPLOYED_MODEL_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) "sharedMemorySizeMb": "A String", # Immutable. The amount of the VM memory to reserve as the shared memory for the model in megabytes. "startupProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes startup probe. - "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take. + "exec": { # ExecAction specifies a command to execute. # ExecAction probes the health of a container by executing a command. "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. "A String", ], @@ -536,7 +536,7 @@

Method Details

}, ], "healthProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes readiness probe. - "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take. + "exec": { # ExecAction specifies a command to execute. # ExecAction probes the health of a container by executing a command. "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. "A String", ], @@ -554,7 +554,7 @@

Method Details

"predictRoute": "A String", # Immutable. HTTP path on the container to send prediction requests to. Vertex AI forwards requests sent using projects.locations.endpoints.predict to this path on the container's IP address and port. Vertex AI then returns the container's response in the API response. For example, if you set this field to `/foo`, then when Vertex AI receives a prediction request, it forwards the request body in a POST request to the `/foo` path on the port of your container specified by the first value of this `ModelContainerSpec`'s ports field. If you don't specify this field, it defaults to the following value when you deploy this Model to an Endpoint: /v1/endpoints/ENDPOINT/deployedModels/DEPLOYED_MODEL:predict The placeholders in this value are replaced as follows: * ENDPOINT: The last segment (following `endpoints/`)of the Endpoint.name][] field of the Endpoint where this Model has been deployed. (Vertex AI makes this value available to your container code as the [`AIP_ENDPOINT_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) * DEPLOYED_MODEL: DeployedModel.id of the `DeployedModel`. (Vertex AI makes this value available to your container code as the [`AIP_DEPLOYED_MODEL_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) "sharedMemorySizeMb": "A String", # Immutable. The amount of the VM memory to reserve as the shared memory for the model in megabytes. "startupProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes startup probe. - "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take. + "exec": { # ExecAction specifies a command to execute. # ExecAction probes the health of a container by executing a command. "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. "A String", ], @@ -887,7 +887,7 @@

Method Details

}, ], "healthProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes readiness probe. - "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take. + "exec": { # ExecAction specifies a command to execute. # ExecAction probes the health of a container by executing a command. "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. "A String", ], @@ -905,7 +905,7 @@

Method Details

"predictRoute": "A String", # Immutable. HTTP path on the container to send prediction requests to. Vertex AI forwards requests sent using projects.locations.endpoints.predict to this path on the container's IP address and port. Vertex AI then returns the container's response in the API response. For example, if you set this field to `/foo`, then when Vertex AI receives a prediction request, it forwards the request body in a POST request to the `/foo` path on the port of your container specified by the first value of this `ModelContainerSpec`'s ports field. If you don't specify this field, it defaults to the following value when you deploy this Model to an Endpoint: /v1/endpoints/ENDPOINT/deployedModels/DEPLOYED_MODEL:predict The placeholders in this value are replaced as follows: * ENDPOINT: The last segment (following `endpoints/`)of the Endpoint.name][] field of the Endpoint where this Model has been deployed. (Vertex AI makes this value available to your container code as the [`AIP_ENDPOINT_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) * DEPLOYED_MODEL: DeployedModel.id of the `DeployedModel`. (Vertex AI makes this value available to your container code as the [`AIP_DEPLOYED_MODEL_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) "sharedMemorySizeMb": "A String", # Immutable. The amount of the VM memory to reserve as the shared memory for the model in megabytes. "startupProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes startup probe. - "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take. + "exec": { # ExecAction specifies a command to execute. # ExecAction probes the health of a container by executing a command. "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. "A String", ], @@ -1210,7 +1210,7 @@

Method Details

}, ], "healthProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes readiness probe. - "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take. + "exec": { # ExecAction specifies a command to execute. # ExecAction probes the health of a container by executing a command. "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. "A String", ], @@ -1228,7 +1228,7 @@

Method Details

"predictRoute": "A String", # Immutable. HTTP path on the container to send prediction requests to. Vertex AI forwards requests sent using projects.locations.endpoints.predict to this path on the container's IP address and port. Vertex AI then returns the container's response in the API response. For example, if you set this field to `/foo`, then when Vertex AI receives a prediction request, it forwards the request body in a POST request to the `/foo` path on the port of your container specified by the first value of this `ModelContainerSpec`'s ports field. If you don't specify this field, it defaults to the following value when you deploy this Model to an Endpoint: /v1/endpoints/ENDPOINT/deployedModels/DEPLOYED_MODEL:predict The placeholders in this value are replaced as follows: * ENDPOINT: The last segment (following `endpoints/`)of the Endpoint.name][] field of the Endpoint where this Model has been deployed. (Vertex AI makes this value available to your container code as the [`AIP_ENDPOINT_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) * DEPLOYED_MODEL: DeployedModel.id of the `DeployedModel`. (Vertex AI makes this value available to your container code as the [`AIP_DEPLOYED_MODEL_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) "sharedMemorySizeMb": "A String", # Immutable. The amount of the VM memory to reserve as the shared memory for the model in megabytes. "startupProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes startup probe. - "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take. + "exec": { # ExecAction specifies a command to execute. # ExecAction probes the health of a container by executing a command. "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. "A String", ], diff --git a/docs/dyn/aiplatform_v1.projects.locations.tuningJobs.html b/docs/dyn/aiplatform_v1.projects.locations.tuningJobs.html index 34bb882cdff..84551776481 100644 --- a/docs/dyn/aiplatform_v1.projects.locations.tuningJobs.html +++ b/docs/dyn/aiplatform_v1.projects.locations.tuningJobs.html @@ -175,6 +175,274 @@

Method Details

}, "tunedModelDisplayName": "A String", # Optional. The display name of the TunedModel. The name can be up to 128 characters long and can consist of any UTF-8 characters. "tuningDataStats": { # The tuning data statistic values for TuningJob. # Output only. The tuning data statistics associated with this TuningJob. + "distillationDataStats": { # Statistics computed for datasets used for distillation. # Statistics for distillation. + "trainingDatasetStats": { # Statistics computed over a tuning dataset. # Output only. Statistics computed for the training dataset. + "totalBillableCharacterCount": "A String", # Output only. Number of billable characters in the tuning dataset. + "totalTuningCharacterCount": "A String", # Output only. Number of tuning characters in the tuning dataset. + "tuningDatasetExampleCount": "A String", # Output only. Number of examples in the tuning dataset. + "tuningStepCount": "A String", # Output only. Number of tuning steps for this Tuning Job. + "userDatasetExamples": [ # Output only. Sample user messages in the training dataset uri. + { # The base structured datatype containing multi-part content of a message. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn. + "parts": [ # Required. Ordered `Parts` that constitute a single message. Parts may have different IANA MIME types. + { # A datatype containing media that is part of a multi-part `Content` message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. A `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes. + "fileData": { # URI based data. # Optional. URI based data. + "fileUri": "A String", # Required. URI. + "mimeType": "A String", # Required. The IANA standard MIME type of the source data. + }, + "functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] with the parameters and their values. + "args": { # Optional. Required. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details. + "a_key": "", # Properties of the object. + }, + "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name]. + }, + "functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result output of a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function call. It is used as context to the model. + "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name]. + "response": { # Required. The function response in JSON object format. + "a_key": "", # Properties of the object. + }, + }, + "inlineData": { # Content blob. 
It's preferred to send as text directly rather than raw bytes. # Optional. Inlined bytes data. + "data": "A String", # Required. Raw bytes. + "mimeType": "A String", # Required. The IANA standard MIME type of the source data. + }, + "text": "A String", # Optional. Text part (can be code). + "videoMetadata": { # Metadata describes the input video content. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data. + "endOffset": "A String", # Optional. The end offset of the video. + "startOffset": "A String", # Optional. The start offset of the video. + }, + }, + ], + "role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. Useful to set for multi-turn conversations, otherwise can be left blank or unset. + }, + ], + "userInputTokenDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the user input tokens. + "buckets": [ # Output only. Defines the histogram bucket. + { # Dataset bucket used to create a histogram for the distribution given a population of values. + "count": "A String", # Output only. Number of values in the bucket. + "left": 3.14, # Output only. Left bound of the bucket. + "right": 3.14, # Output only. Right bound of the bucket. + }, + ], + "max": 3.14, # Output only. The maximum of the population values. + "mean": 3.14, # Output only. The arithmetic mean of the values in the population. + "median": 3.14, # Output only. The median of the values in the population. + "min": 3.14, # Output only. The minimum of the population values. + "p5": 3.14, # Output only. The 5th percentile of the values in the population. + "p95": 3.14, # Output only. The 95th percentile of the values in the population. + "sum": 3.14, # Output only. Sum of a given population of values. + }, + "userMessagePerExampleDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the messages per example. + "buckets": [ # Output only. Defines the histogram bucket. + { # Dataset bucket used to create a histogram for the distribution given a population of values. + "count": "A String", # Output only. Number of values in the bucket. + "left": 3.14, # Output only. Left bound of the bucket. + "right": 3.14, # Output only. Right bound of the bucket. + }, + ], + "max": 3.14, # Output only. The maximum of the population values. + "mean": 3.14, # Output only. The arithmetic mean of the values in the population. + "median": 3.14, # Output only. The median of the values in the population. + "min": 3.14, # Output only. The minimum of the population values. + "p5": 3.14, # Output only. The 5th percentile of the values in the population. + "p95": 3.14, # Output only. The 95th percentile of the values in the population. + "sum": 3.14, # Output only. Sum of a given population of values. + }, + "userOutputTokenDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the user output tokens. + "buckets": [ # Output only. Defines the histogram bucket. + { # Dataset bucket used to create a histogram for the distribution given a population of values. + "count": "A String", # Output only. Number of values in the bucket. + "left": 3.14, # Output only. Left bound of the bucket. + "right": 3.14, # Output only. Right bound of the bucket. + }, + ], + "max": 3.14, # Output only. The maximum of the population values. + "mean": 3.14, # Output only. 
The arithmetic mean of the values in the population. + "median": 3.14, # Output only. The median of the values in the population. + "min": 3.14, # Output only. The minimum of the population values. + "p5": 3.14, # Output only. The 5th percentile of the values in the population. + "p95": 3.14, # Output only. The 95th percentile of the values in the population. + "sum": 3.14, # Output only. Sum of a given population of values. + }, + }, + }, + "reinforcementLearningDataStats": { # Statistics computed for datasets used for reinforcement learning. # Statistics for reinforcement learning. + "preferenceDatasetStats": { # Statistics computed over a tuning dataset. # Output only. Statistics computed for the preference dataset. This can be either a human preference dataset or a preference dataset generated from AI feedback. + "totalBillableCharacterCount": "A String", # Output only. Number of billable characters in the tuning dataset. + "totalTuningCharacterCount": "A String", # Output only. Number of tuning characters in the tuning dataset. + "tuningDatasetExampleCount": "A String", # Output only. Number of examples in the tuning dataset. + "tuningStepCount": "A String", # Output only. Number of tuning steps for this Tuning Job. + "userDatasetExamples": [ # Output only. Sample user messages in the training dataset uri. + { # The base structured datatype containing multi-part content of a message. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn. + "parts": [ # Required. Ordered `Parts` that constitute a single message. Parts may have different IANA MIME types. + { # A datatype containing media that is part of a multi-part `Content` message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. A `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes. + "fileData": { # URI based data. # Optional. URI based data. + "fileUri": "A String", # Required. URI. + "mimeType": "A String", # Required. The IANA standard MIME type of the source data. + }, + "functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] with the parameters and their values. + "args": { # Optional. Required. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details. + "a_key": "", # Properties of the object. + }, + "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name]. + }, + "functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result output of a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function call. It is used as context to the model. + "name": "A String", # Required. 
The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name]. + "response": { # Required. The function response in JSON object format. + "a_key": "", # Properties of the object. + }, + }, + "inlineData": { # Content blob. It's preferred to send as text directly rather than raw bytes. # Optional. Inlined bytes data. + "data": "A String", # Required. Raw bytes. + "mimeType": "A String", # Required. The IANA standard MIME type of the source data. + }, + "text": "A String", # Optional. Text part (can be code). + "videoMetadata": { # Metadata describes the input video content. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data. + "endOffset": "A String", # Optional. The end offset of the video. + "startOffset": "A String", # Optional. The start offset of the video. + }, + }, + ], + "role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. Useful to set for multi-turn conversations, otherwise can be left blank or unset. + }, + ], + "userInputTokenDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the user input tokens. + "buckets": [ # Output only. Defines the histogram bucket. + { # Dataset bucket used to create a histogram for the distribution given a population of values. + "count": "A String", # Output only. Number of values in the bucket. + "left": 3.14, # Output only. Left bound of the bucket. + "right": 3.14, # Output only. Right bound of the bucket. + }, + ], + "max": 3.14, # Output only. The maximum of the population values. + "mean": 3.14, # Output only. The arithmetic mean of the values in the population. + "median": 3.14, # Output only. The median of the values in the population. + "min": 3.14, # Output only. The minimum of the population values. + "p5": 3.14, # Output only. The 5th percentile of the values in the population. + "p95": 3.14, # Output only. The 95th percentile of the values in the population. + "sum": 3.14, # Output only. Sum of a given population of values. + }, + "userMessagePerExampleDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the messages per example. + "buckets": [ # Output only. Defines the histogram bucket. + { # Dataset bucket used to create a histogram for the distribution given a population of values. + "count": "A String", # Output only. Number of values in the bucket. + "left": 3.14, # Output only. Left bound of the bucket. + "right": 3.14, # Output only. Right bound of the bucket. + }, + ], + "max": 3.14, # Output only. The maximum of the population values. + "mean": 3.14, # Output only. The arithmetic mean of the values in the population. + "median": 3.14, # Output only. The median of the values in the population. + "min": 3.14, # Output only. The minimum of the population values. + "p5": 3.14, # Output only. The 5th percentile of the values in the population. + "p95": 3.14, # Output only. The 95th percentile of the values in the population. + "sum": 3.14, # Output only. Sum of a given population of values. + }, + "userOutputTokenDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the user output tokens. + "buckets": [ # Output only. Defines the histogram bucket. + { # Dataset bucket used to create a histogram for the distribution given a population of values. + "count": "A String", # Output only. Number of values in the bucket. 
+ "left": 3.14, # Output only. Left bound of the bucket. + "right": 3.14, # Output only. Right bound of the bucket. + }, + ], + "max": 3.14, # Output only. The maximum of the population values. + "mean": 3.14, # Output only. The arithmetic mean of the values in the population. + "median": 3.14, # Output only. The median of the values in the population. + "min": 3.14, # Output only. The minimum of the population values. + "p5": 3.14, # Output only. The 5th percentile of the values in the population. + "p95": 3.14, # Output only. The 95th percentile of the values in the population. + "sum": 3.14, # Output only. Sum of a given population of values. + }, + }, + "promptDatasetStats": { # Statistics computed over a tuning dataset. # Output only. Statistics computed for the prompt dataset used during reinforcement learning. + "totalBillableCharacterCount": "A String", # Output only. Number of billable characters in the tuning dataset. + "totalTuningCharacterCount": "A String", # Output only. Number of tuning characters in the tuning dataset. + "tuningDatasetExampleCount": "A String", # Output only. Number of examples in the tuning dataset. + "tuningStepCount": "A String", # Output only. Number of tuning steps for this Tuning Job. + "userDatasetExamples": [ # Output only. Sample user messages in the training dataset uri. + { # The base structured datatype containing multi-part content of a message. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn. + "parts": [ # Required. Ordered `Parts` that constitute a single message. Parts may have different IANA MIME types. + { # A datatype containing media that is part of a multi-part `Content` message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. A `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes. + "fileData": { # URI based data. # Optional. URI based data. + "fileUri": "A String", # Required. URI. + "mimeType": "A String", # Required. The IANA standard MIME type of the source data. + }, + "functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] with the parameters and their values. + "args": { # Optional. Required. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details. + "a_key": "", # Properties of the object. + }, + "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name]. + }, + "functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result output of a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function call. It is used as context to the model. + "name": "A String", # Required. 
The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name]. + "response": { # Required. The function response in JSON object format. + "a_key": "", # Properties of the object. + }, + }, + "inlineData": { # Content blob. It's preferred to send as text directly rather than raw bytes. # Optional. Inlined bytes data. + "data": "A String", # Required. Raw bytes. + "mimeType": "A String", # Required. The IANA standard MIME type of the source data. + }, + "text": "A String", # Optional. Text part (can be code). + "videoMetadata": { # Metadata describes the input video content. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data. + "endOffset": "A String", # Optional. The end offset of the video. + "startOffset": "A String", # Optional. The start offset of the video. + }, + }, + ], + "role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. Useful to set for multi-turn conversations, otherwise can be left blank or unset. + }, + ], + "userInputTokenDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the user input tokens. + "buckets": [ # Output only. Defines the histogram bucket. + { # Dataset bucket used to create a histogram for the distribution given a population of values. + "count": "A String", # Output only. Number of values in the bucket. + "left": 3.14, # Output only. Left bound of the bucket. + "right": 3.14, # Output only. Right bound of the bucket. + }, + ], + "max": 3.14, # Output only. The maximum of the population values. + "mean": 3.14, # Output only. The arithmetic mean of the values in the population. + "median": 3.14, # Output only. The median of the values in the population. + "min": 3.14, # Output only. The minimum of the population values. + "p5": 3.14, # Output only. The 5th percentile of the values in the population. + "p95": 3.14, # Output only. The 95th percentile of the values in the population. + "sum": 3.14, # Output only. Sum of a given population of values. + }, + "userMessagePerExampleDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the messages per example. + "buckets": [ # Output only. Defines the histogram bucket. + { # Dataset bucket used to create a histogram for the distribution given a population of values. + "count": "A String", # Output only. Number of values in the bucket. + "left": 3.14, # Output only. Left bound of the bucket. + "right": 3.14, # Output only. Right bound of the bucket. + }, + ], + "max": 3.14, # Output only. The maximum of the population values. + "mean": 3.14, # Output only. The arithmetic mean of the values in the population. + "median": 3.14, # Output only. The median of the values in the population. + "min": 3.14, # Output only. The minimum of the population values. + "p5": 3.14, # Output only. The 5th percentile of the values in the population. + "p95": 3.14, # Output only. The 95th percentile of the values in the population. + "sum": 3.14, # Output only. Sum of a given population of values. + }, + "userOutputTokenDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the user output tokens. + "buckets": [ # Output only. Defines the histogram bucket. + { # Dataset bucket used to create a histogram for the distribution given a population of values. + "count": "A String", # Output only. Number of values in the bucket. 
+ "left": 3.14, # Output only. Left bound of the bucket. + "right": 3.14, # Output only. Right bound of the bucket. + }, + ], + "max": 3.14, # Output only. The maximum of the population values. + "mean": 3.14, # Output only. The arithmetic mean of the values in the population. + "median": 3.14, # Output only. The median of the values in the population. + "min": 3.14, # Output only. The minimum of the population values. + "p5": 3.14, # Output only. The 5th percentile of the values in the population. + "p95": 3.14, # Output only. The 95th percentile of the values in the population. + "sum": 3.14, # Output only. Sum of a given population of values. + }, + }, + }, "supervisedTuningDataStats": { # Tuning data statistics for Supervised Tuning. # The SFT Tuning data stats. "totalBillableCharacterCount": "A String", # Output only. Number of billable characters in the tuning dataset. "totalTuningCharacterCount": "A String", # Output only. Number of tuning characters in the tuning dataset. @@ -314,6 +582,274 @@

Method Details

}, "tunedModelDisplayName": "A String", # Optional. The display name of the TunedModel. The name can be up to 128 characters long and can consist of any UTF-8 characters. "tuningDataStats": { # The tuning data statistic values for TuningJob. # Output only. The tuning data statistics associated with this TuningJob. + "distillationDataStats": { # Statistics computed for datasets used for distillation. # Statistics for distillation. + "trainingDatasetStats": { # Statistics computed over a tuning dataset. # Output only. Statistics computed for the training dataset. + "totalBillableCharacterCount": "A String", # Output only. Number of billable characters in the tuning dataset. + "totalTuningCharacterCount": "A String", # Output only. Number of tuning characters in the tuning dataset. + "tuningDatasetExampleCount": "A String", # Output only. Number of examples in the tuning dataset. + "tuningStepCount": "A String", # Output only. Number of tuning steps for this Tuning Job. + "userDatasetExamples": [ # Output only. Sample user messages in the training dataset uri. + { # The base structured datatype containing multi-part content of a message. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn. + "parts": [ # Required. Ordered `Parts` that constitute a single message. Parts may have different IANA MIME types. + { # A datatype containing media that is part of a multi-part `Content` message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. A `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes. + "fileData": { # URI based data. # Optional. URI based data. + "fileUri": "A String", # Required. URI. + "mimeType": "A String", # Required. The IANA standard MIME type of the source data. + }, + "functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] with the parameters and their values. + "args": { # Optional. Required. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details. + "a_key": "", # Properties of the object. + }, + "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name]. + }, + "functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result output of a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function call. It is used as context to the model. + "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name]. + "response": { # Required. The function response in JSON object format. + "a_key": "", # Properties of the object. + }, + }, + "inlineData": { # Content blob. 
It's preferred to send as text directly rather than raw bytes. # Optional. Inlined bytes data. + "data": "A String", # Required. Raw bytes. + "mimeType": "A String", # Required. The IANA standard MIME type of the source data. + }, + "text": "A String", # Optional. Text part (can be code). + "videoMetadata": { # Metadata describes the input video content. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data. + "endOffset": "A String", # Optional. The end offset of the video. + "startOffset": "A String", # Optional. The start offset of the video. + }, + }, + ], + "role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. Useful to set for multi-turn conversations, otherwise can be left blank or unset. + }, + ], + "userInputTokenDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the user input tokens. + "buckets": [ # Output only. Defines the histogram bucket. + { # Dataset bucket used to create a histogram for the distribution given a population of values. + "count": "A String", # Output only. Number of values in the bucket. + "left": 3.14, # Output only. Left bound of the bucket. + "right": 3.14, # Output only. Right bound of the bucket. + }, + ], + "max": 3.14, # Output only. The maximum of the population values. + "mean": 3.14, # Output only. The arithmetic mean of the values in the population. + "median": 3.14, # Output only. The median of the values in the population. + "min": 3.14, # Output only. The minimum of the population values. + "p5": 3.14, # Output only. The 5th percentile of the values in the population. + "p95": 3.14, # Output only. The 95th percentile of the values in the population. + "sum": 3.14, # Output only. Sum of a given population of values. + }, + "userMessagePerExampleDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the messages per example. + "buckets": [ # Output only. Defines the histogram bucket. + { # Dataset bucket used to create a histogram for the distribution given a population of values. + "count": "A String", # Output only. Number of values in the bucket. + "left": 3.14, # Output only. Left bound of the bucket. + "right": 3.14, # Output only. Right bound of the bucket. + }, + ], + "max": 3.14, # Output only. The maximum of the population values. + "mean": 3.14, # Output only. The arithmetic mean of the values in the population. + "median": 3.14, # Output only. The median of the values in the population. + "min": 3.14, # Output only. The minimum of the population values. + "p5": 3.14, # Output only. The 5th percentile of the values in the population. + "p95": 3.14, # Output only. The 95th percentile of the values in the population. + "sum": 3.14, # Output only. Sum of a given population of values. + }, + "userOutputTokenDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the user output tokens. + "buckets": [ # Output only. Defines the histogram bucket. + { # Dataset bucket used to create a histogram for the distribution given a population of values. + "count": "A String", # Output only. Number of values in the bucket. + "left": 3.14, # Output only. Left bound of the bucket. + "right": 3.14, # Output only. Right bound of the bucket. + }, + ], + "max": 3.14, # Output only. The maximum of the population values. + "mean": 3.14, # Output only. 
The arithmetic mean of the values in the population. + "median": 3.14, # Output only. The median of the values in the population. + "min": 3.14, # Output only. The minimum of the population values. + "p5": 3.14, # Output only. The 5th percentile of the values in the population. + "p95": 3.14, # Output only. The 95th percentile of the values in the population. + "sum": 3.14, # Output only. Sum of a given population of values. + }, + }, + }, + "reinforcementLearningDataStats": { # Statistics computed for datasets used for reinforcement learning. # Statistics for reinforcement learning. + "preferenceDatasetStats": { # Statistics computed over a tuning dataset. # Output only. Statistics computed for the preference dataset. This can be either a human preference dataset or a preference dataset generated from AI feedback. + "totalBillableCharacterCount": "A String", # Output only. Number of billable characters in the tuning dataset. + "totalTuningCharacterCount": "A String", # Output only. Number of tuning characters in the tuning dataset. + "tuningDatasetExampleCount": "A String", # Output only. Number of examples in the tuning dataset. + "tuningStepCount": "A String", # Output only. Number of tuning steps for this Tuning Job. + "userDatasetExamples": [ # Output only. Sample user messages in the training dataset uri. + { # The base structured datatype containing multi-part content of a message. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn. + "parts": [ # Required. Ordered `Parts` that constitute a single message. Parts may have different IANA MIME types. + { # A datatype containing media that is part of a multi-part `Content` message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. A `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes. + "fileData": { # URI based data. # Optional. URI based data. + "fileUri": "A String", # Required. URI. + "mimeType": "A String", # Required. The IANA standard MIME type of the source data. + }, + "functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] with the parameters and their values. + "args": { # Optional. Required. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details. + "a_key": "", # Properties of the object. + }, + "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name]. + }, + "functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result output of a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function call. It is used as context to the model. + "name": "A String", # Required. 
The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name]. + "response": { # Required. The function response in JSON object format. + "a_key": "", # Properties of the object. + }, + }, + "inlineData": { # Content blob. It's preferred to send as text directly rather than raw bytes. # Optional. Inlined bytes data. + "data": "A String", # Required. Raw bytes. + "mimeType": "A String", # Required. The IANA standard MIME type of the source data. + }, + "text": "A String", # Optional. Text part (can be code). + "videoMetadata": { # Metadata describes the input video content. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data. + "endOffset": "A String", # Optional. The end offset of the video. + "startOffset": "A String", # Optional. The start offset of the video. + }, + }, + ], + "role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. Useful to set for multi-turn conversations, otherwise can be left blank or unset. + }, + ], + "userInputTokenDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the user input tokens. + "buckets": [ # Output only. Defines the histogram bucket. + { # Dataset bucket used to create a histogram for the distribution given a population of values. + "count": "A String", # Output only. Number of values in the bucket. + "left": 3.14, # Output only. Left bound of the bucket. + "right": 3.14, # Output only. Right bound of the bucket. + }, + ], + "max": 3.14, # Output only. The maximum of the population values. + "mean": 3.14, # Output only. The arithmetic mean of the values in the population. + "median": 3.14, # Output only. The median of the values in the population. + "min": 3.14, # Output only. The minimum of the population values. + "p5": 3.14, # Output only. The 5th percentile of the values in the population. + "p95": 3.14, # Output only. The 95th percentile of the values in the population. + "sum": 3.14, # Output only. Sum of a given population of values. + }, + "userMessagePerExampleDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the messages per example. + "buckets": [ # Output only. Defines the histogram bucket. + { # Dataset bucket used to create a histogram for the distribution given a population of values. + "count": "A String", # Output only. Number of values in the bucket. + "left": 3.14, # Output only. Left bound of the bucket. + "right": 3.14, # Output only. Right bound of the bucket. + }, + ], + "max": 3.14, # Output only. The maximum of the population values. + "mean": 3.14, # Output only. The arithmetic mean of the values in the population. + "median": 3.14, # Output only. The median of the values in the population. + "min": 3.14, # Output only. The minimum of the population values. + "p5": 3.14, # Output only. The 5th percentile of the values in the population. + "p95": 3.14, # Output only. The 95th percentile of the values in the population. + "sum": 3.14, # Output only. Sum of a given population of values. + }, + "userOutputTokenDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the user output tokens. + "buckets": [ # Output only. Defines the histogram bucket. + { # Dataset bucket used to create a histogram for the distribution given a population of values. + "count": "A String", # Output only. Number of values in the bucket. 
+ "left": 3.14, # Output only. Left bound of the bucket. + "right": 3.14, # Output only. Right bound of the bucket. + }, + ], + "max": 3.14, # Output only. The maximum of the population values. + "mean": 3.14, # Output only. The arithmetic mean of the values in the population. + "median": 3.14, # Output only. The median of the values in the population. + "min": 3.14, # Output only. The minimum of the population values. + "p5": 3.14, # Output only. The 5th percentile of the values in the population. + "p95": 3.14, # Output only. The 95th percentile of the values in the population. + "sum": 3.14, # Output only. Sum of a given population of values. + }, + }, + "promptDatasetStats": { # Statistics computed over a tuning dataset. # Output only. Statistics computed for the prompt dataset used during reinforcement learning. + "totalBillableCharacterCount": "A String", # Output only. Number of billable characters in the tuning dataset. + "totalTuningCharacterCount": "A String", # Output only. Number of tuning characters in the tuning dataset. + "tuningDatasetExampleCount": "A String", # Output only. Number of examples in the tuning dataset. + "tuningStepCount": "A String", # Output only. Number of tuning steps for this Tuning Job. + "userDatasetExamples": [ # Output only. Sample user messages in the training dataset uri. + { # The base structured datatype containing multi-part content of a message. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn. + "parts": [ # Required. Ordered `Parts` that constitute a single message. Parts may have different IANA MIME types. + { # A datatype containing media that is part of a multi-part `Content` message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. A `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes. + "fileData": { # URI based data. # Optional. URI based data. + "fileUri": "A String", # Required. URI. + "mimeType": "A String", # Required. The IANA standard MIME type of the source data. + }, + "functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] with the parameters and their values. + "args": { # Optional. Required. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details. + "a_key": "", # Properties of the object. + }, + "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name]. + }, + "functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result output of a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function call. It is used as context to the model. + "name": "A String", # Required. 
The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name]. + "response": { # Required. The function response in JSON object format. + "a_key": "", # Properties of the object. + }, + }, + "inlineData": { # Content blob. It's preferred to send as text directly rather than raw bytes. # Optional. Inlined bytes data. + "data": "A String", # Required. Raw bytes. + "mimeType": "A String", # Required. The IANA standard MIME type of the source data. + }, + "text": "A String", # Optional. Text part (can be code). + "videoMetadata": { # Metadata describes the input video content. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data. + "endOffset": "A String", # Optional. The end offset of the video. + "startOffset": "A String", # Optional. The start offset of the video. + }, + }, + ], + "role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. Useful to set for multi-turn conversations, otherwise can be left blank or unset. + }, + ], + "userInputTokenDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the user input tokens. + "buckets": [ # Output only. Defines the histogram bucket. + { # Dataset bucket used to create a histogram for the distribution given a population of values. + "count": "A String", # Output only. Number of values in the bucket. + "left": 3.14, # Output only. Left bound of the bucket. + "right": 3.14, # Output only. Right bound of the bucket. + }, + ], + "max": 3.14, # Output only. The maximum of the population values. + "mean": 3.14, # Output only. The arithmetic mean of the values in the population. + "median": 3.14, # Output only. The median of the values in the population. + "min": 3.14, # Output only. The minimum of the population values. + "p5": 3.14, # Output only. The 5th percentile of the values in the population. + "p95": 3.14, # Output only. The 95th percentile of the values in the population. + "sum": 3.14, # Output only. Sum of a given population of values. + }, + "userMessagePerExampleDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the messages per example. + "buckets": [ # Output only. Defines the histogram bucket. + { # Dataset bucket used to create a histogram for the distribution given a population of values. + "count": "A String", # Output only. Number of values in the bucket. + "left": 3.14, # Output only. Left bound of the bucket. + "right": 3.14, # Output only. Right bound of the bucket. + }, + ], + "max": 3.14, # Output only. The maximum of the population values. + "mean": 3.14, # Output only. The arithmetic mean of the values in the population. + "median": 3.14, # Output only. The median of the values in the population. + "min": 3.14, # Output only. The minimum of the population values. + "p5": 3.14, # Output only. The 5th percentile of the values in the population. + "p95": 3.14, # Output only. The 95th percentile of the values in the population. + "sum": 3.14, # Output only. Sum of a given population of values. + }, + "userOutputTokenDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the user output tokens. + "buckets": [ # Output only. Defines the histogram bucket. + { # Dataset bucket used to create a histogram for the distribution given a population of values. + "count": "A String", # Output only. Number of values in the bucket. 
+ "left": 3.14, # Output only. Left bound of the bucket. + "right": 3.14, # Output only. Right bound of the bucket. + }, + ], + "max": 3.14, # Output only. The maximum of the population values. + "mean": 3.14, # Output only. The arithmetic mean of the values in the population. + "median": 3.14, # Output only. The median of the values in the population. + "min": 3.14, # Output only. The minimum of the population values. + "p5": 3.14, # Output only. The 5th percentile of the values in the population. + "p95": 3.14, # Output only. The 95th percentile of the values in the population. + "sum": 3.14, # Output only. Sum of a given population of values. + }, + }, + }, "supervisedTuningDataStats": { # Tuning data statistics for Supervised Tuning. # The SFT Tuning data stats. "totalBillableCharacterCount": "A String", # Output only. Number of billable characters in the tuning dataset. "totalTuningCharacterCount": "A String", # Output only. Number of tuning characters in the tuning dataset. @@ -460,6 +996,274 @@

Method Details

}, "tunedModelDisplayName": "A String", # Optional. The display name of the TunedModel. The name can be up to 128 characters long and can consist of any UTF-8 characters. "tuningDataStats": { # The tuning data statistic values for TuningJob. # Output only. The tuning data statistics associated with this TuningJob. + "distillationDataStats": { # Statistics computed for datasets used for distillation. # Statistics for distillation. + "trainingDatasetStats": { # Statistics computed over a tuning dataset. # Output only. Statistics computed for the training dataset. + "totalBillableCharacterCount": "A String", # Output only. Number of billable characters in the tuning dataset. + "totalTuningCharacterCount": "A String", # Output only. Number of tuning characters in the tuning dataset. + "tuningDatasetExampleCount": "A String", # Output only. Number of examples in the tuning dataset. + "tuningStepCount": "A String", # Output only. Number of tuning steps for this Tuning Job. + "userDatasetExamples": [ # Output only. Sample user messages in the training dataset uri. + { # The base structured datatype containing multi-part content of a message. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn. + "parts": [ # Required. Ordered `Parts` that constitute a single message. Parts may have different IANA MIME types. + { # A datatype containing media that is part of a multi-part `Content` message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. A `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes. + "fileData": { # URI based data. # Optional. URI based data. + "fileUri": "A String", # Required. URI. + "mimeType": "A String", # Required. The IANA standard MIME type of the source data. + }, + "functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] with the parameters and their values. + "args": { # Optional. Required. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details. + "a_key": "", # Properties of the object. + }, + "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name]. + }, + "functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result output of a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function call. It is used as context to the model. + "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name]. + "response": { # Required. The function response in JSON object format. + "a_key": "", # Properties of the object. + }, + }, + "inlineData": { # Content blob. 
It's preferred to send as text directly rather than raw bytes. # Optional. Inlined bytes data. + "data": "A String", # Required. Raw bytes. + "mimeType": "A String", # Required. The IANA standard MIME type of the source data. + }, + "text": "A String", # Optional. Text part (can be code). + "videoMetadata": { # Metadata describes the input video content. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data. + "endOffset": "A String", # Optional. The end offset of the video. + "startOffset": "A String", # Optional. The start offset of the video. + }, + }, + ], + "role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. Useful to set for multi-turn conversations, otherwise can be left blank or unset. + }, + ], + "userInputTokenDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the user input tokens. + "buckets": [ # Output only. Defines the histogram bucket. + { # Dataset bucket used to create a histogram for the distribution given a population of values. + "count": "A String", # Output only. Number of values in the bucket. + "left": 3.14, # Output only. Left bound of the bucket. + "right": 3.14, # Output only. Right bound of the bucket. + }, + ], + "max": 3.14, # Output only. The maximum of the population values. + "mean": 3.14, # Output only. The arithmetic mean of the values in the population. + "median": 3.14, # Output only. The median of the values in the population. + "min": 3.14, # Output only. The minimum of the population values. + "p5": 3.14, # Output only. The 5th percentile of the values in the population. + "p95": 3.14, # Output only. The 95th percentile of the values in the population. + "sum": 3.14, # Output only. Sum of a given population of values. + }, + "userMessagePerExampleDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the messages per example. + "buckets": [ # Output only. Defines the histogram bucket. + { # Dataset bucket used to create a histogram for the distribution given a population of values. + "count": "A String", # Output only. Number of values in the bucket. + "left": 3.14, # Output only. Left bound of the bucket. + "right": 3.14, # Output only. Right bound of the bucket. + }, + ], + "max": 3.14, # Output only. The maximum of the population values. + "mean": 3.14, # Output only. The arithmetic mean of the values in the population. + "median": 3.14, # Output only. The median of the values in the population. + "min": 3.14, # Output only. The minimum of the population values. + "p5": 3.14, # Output only. The 5th percentile of the values in the population. + "p95": 3.14, # Output only. The 95th percentile of the values in the population. + "sum": 3.14, # Output only. Sum of a given population of values. + }, + "userOutputTokenDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the user output tokens. + "buckets": [ # Output only. Defines the histogram bucket. + { # Dataset bucket used to create a histogram for the distribution given a population of values. + "count": "A String", # Output only. Number of values in the bucket. + "left": 3.14, # Output only. Left bound of the bucket. + "right": 3.14, # Output only. Right bound of the bucket. + }, + ], + "max": 3.14, # Output only. The maximum of the population values. + "mean": 3.14, # Output only. 
The arithmetic mean of the values in the population. + "median": 3.14, # Output only. The median of the values in the population. + "min": 3.14, # Output only. The minimum of the population values. + "p5": 3.14, # Output only. The 5th percentile of the values in the population. + "p95": 3.14, # Output only. The 95th percentile of the values in the population. + "sum": 3.14, # Output only. Sum of a given population of values. + }, + }, + }, + "reinforcementLearningDataStats": { # Statistics computed for datasets used for reinforcement learning. # Statistics for reinforcement learning. + "preferenceDatasetStats": { # Statistics computed over a tuning dataset. # Output only. Statistics computed for the preference dataset. This can be either a human preference dataset or a preference dataset generated from AI feedback. + "totalBillableCharacterCount": "A String", # Output only. Number of billable characters in the tuning dataset. + "totalTuningCharacterCount": "A String", # Output only. Number of tuning characters in the tuning dataset. + "tuningDatasetExampleCount": "A String", # Output only. Number of examples in the tuning dataset. + "tuningStepCount": "A String", # Output only. Number of tuning steps for this Tuning Job. + "userDatasetExamples": [ # Output only. Sample user messages in the training dataset uri. + { # The base structured datatype containing multi-part content of a message. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn. + "parts": [ # Required. Ordered `Parts` that constitute a single message. Parts may have different IANA MIME types. + { # A datatype containing media that is part of a multi-part `Content` message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. A `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes. + "fileData": { # URI based data. # Optional. URI based data. + "fileUri": "A String", # Required. URI. + "mimeType": "A String", # Required. The IANA standard MIME type of the source data. + }, + "functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] with the parameters and their values. + "args": { # Optional. Required. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details. + "a_key": "", # Properties of the object. + }, + "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name]. + }, + "functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result output of a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function call. It is used as context to the model. + "name": "A String", # Required. 
The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name]. + "response": { # Required. The function response in JSON object format. + "a_key": "", # Properties of the object. + }, + }, + "inlineData": { # Content blob. It's preferred to send as text directly rather than raw bytes. # Optional. Inlined bytes data. + "data": "A String", # Required. Raw bytes. + "mimeType": "A String", # Required. The IANA standard MIME type of the source data. + }, + "text": "A String", # Optional. Text part (can be code). + "videoMetadata": { # Metadata describes the input video content. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data. + "endOffset": "A String", # Optional. The end offset of the video. + "startOffset": "A String", # Optional. The start offset of the video. + }, + }, + ], + "role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. Useful to set for multi-turn conversations, otherwise can be left blank or unset. + }, + ], + "userInputTokenDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the user input tokens. + "buckets": [ # Output only. Defines the histogram bucket. + { # Dataset bucket used to create a histogram for the distribution given a population of values. + "count": "A String", # Output only. Number of values in the bucket. + "left": 3.14, # Output only. Left bound of the bucket. + "right": 3.14, # Output only. Right bound of the bucket. + }, + ], + "max": 3.14, # Output only. The maximum of the population values. + "mean": 3.14, # Output only. The arithmetic mean of the values in the population. + "median": 3.14, # Output only. The median of the values in the population. + "min": 3.14, # Output only. The minimum of the population values. + "p5": 3.14, # Output only. The 5th percentile of the values in the population. + "p95": 3.14, # Output only. The 95th percentile of the values in the population. + "sum": 3.14, # Output only. Sum of a given population of values. + }, + "userMessagePerExampleDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the messages per example. + "buckets": [ # Output only. Defines the histogram bucket. + { # Dataset bucket used to create a histogram for the distribution given a population of values. + "count": "A String", # Output only. Number of values in the bucket. + "left": 3.14, # Output only. Left bound of the bucket. + "right": 3.14, # Output only. Right bound of the bucket. + }, + ], + "max": 3.14, # Output only. The maximum of the population values. + "mean": 3.14, # Output only. The arithmetic mean of the values in the population. + "median": 3.14, # Output only. The median of the values in the population. + "min": 3.14, # Output only. The minimum of the population values. + "p5": 3.14, # Output only. The 5th percentile of the values in the population. + "p95": 3.14, # Output only. The 95th percentile of the values in the population. + "sum": 3.14, # Output only. Sum of a given population of values. + }, + "userOutputTokenDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the user output tokens. + "buckets": [ # Output only. Defines the histogram bucket. + { # Dataset bucket used to create a histogram for the distribution given a population of values. + "count": "A String", # Output only. Number of values in the bucket. 
+ "left": 3.14, # Output only. Left bound of the bucket. + "right": 3.14, # Output only. Right bound of the bucket. + }, + ], + "max": 3.14, # Output only. The maximum of the population values. + "mean": 3.14, # Output only. The arithmetic mean of the values in the population. + "median": 3.14, # Output only. The median of the values in the population. + "min": 3.14, # Output only. The minimum of the population values. + "p5": 3.14, # Output only. The 5th percentile of the values in the population. + "p95": 3.14, # Output only. The 95th percentile of the values in the population. + "sum": 3.14, # Output only. Sum of a given population of values. + }, + }, + "promptDatasetStats": { # Statistics computed over a tuning dataset. # Output only. Statistics computed for the prompt dataset used during reinforcement learning. + "totalBillableCharacterCount": "A String", # Output only. Number of billable characters in the tuning dataset. + "totalTuningCharacterCount": "A String", # Output only. Number of tuning characters in the tuning dataset. + "tuningDatasetExampleCount": "A String", # Output only. Number of examples in the tuning dataset. + "tuningStepCount": "A String", # Output only. Number of tuning steps for this Tuning Job. + "userDatasetExamples": [ # Output only. Sample user messages in the training dataset uri. + { # The base structured datatype containing multi-part content of a message. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn. + "parts": [ # Required. Ordered `Parts` that constitute a single message. Parts may have different IANA MIME types. + { # A datatype containing media that is part of a multi-part `Content` message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. A `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes. + "fileData": { # URI based data. # Optional. URI based data. + "fileUri": "A String", # Required. URI. + "mimeType": "A String", # Required. The IANA standard MIME type of the source data. + }, + "functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] with the parameters and their values. + "args": { # Optional. Required. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details. + "a_key": "", # Properties of the object. + }, + "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name]. + }, + "functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result output of a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function call. It is used as context to the model. + "name": "A String", # Required. 
The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name]. + "response": { # Required. The function response in JSON object format. + "a_key": "", # Properties of the object. + }, + }, + "inlineData": { # Content blob. It's preferred to send as text directly rather than raw bytes. # Optional. Inlined bytes data. + "data": "A String", # Required. Raw bytes. + "mimeType": "A String", # Required. The IANA standard MIME type of the source data. + }, + "text": "A String", # Optional. Text part (can be code). + "videoMetadata": { # Metadata describes the input video content. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data. + "endOffset": "A String", # Optional. The end offset of the video. + "startOffset": "A String", # Optional. The start offset of the video. + }, + }, + ], + "role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. Useful to set for multi-turn conversations, otherwise can be left blank or unset. + }, + ], + "userInputTokenDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the user input tokens. + "buckets": [ # Output only. Defines the histogram bucket. + { # Dataset bucket used to create a histogram for the distribution given a population of values. + "count": "A String", # Output only. Number of values in the bucket. + "left": 3.14, # Output only. Left bound of the bucket. + "right": 3.14, # Output only. Right bound of the bucket. + }, + ], + "max": 3.14, # Output only. The maximum of the population values. + "mean": 3.14, # Output only. The arithmetic mean of the values in the population. + "median": 3.14, # Output only. The median of the values in the population. + "min": 3.14, # Output only. The minimum of the population values. + "p5": 3.14, # Output only. The 5th percentile of the values in the population. + "p95": 3.14, # Output only. The 95th percentile of the values in the population. + "sum": 3.14, # Output only. Sum of a given population of values. + }, + "userMessagePerExampleDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the messages per example. + "buckets": [ # Output only. Defines the histogram bucket. + { # Dataset bucket used to create a histogram for the distribution given a population of values. + "count": "A String", # Output only. Number of values in the bucket. + "left": 3.14, # Output only. Left bound of the bucket. + "right": 3.14, # Output only. Right bound of the bucket. + }, + ], + "max": 3.14, # Output only. The maximum of the population values. + "mean": 3.14, # Output only. The arithmetic mean of the values in the population. + "median": 3.14, # Output only. The median of the values in the population. + "min": 3.14, # Output only. The minimum of the population values. + "p5": 3.14, # Output only. The 5th percentile of the values in the population. + "p95": 3.14, # Output only. The 95th percentile of the values in the population. + "sum": 3.14, # Output only. Sum of a given population of values. + }, + "userOutputTokenDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the user output tokens. + "buckets": [ # Output only. Defines the histogram bucket. + { # Dataset bucket used to create a histogram for the distribution given a population of values. + "count": "A String", # Output only. Number of values in the bucket. 
+ "left": 3.14, # Output only. Left bound of the bucket. + "right": 3.14, # Output only. Right bound of the bucket. + }, + ], + "max": 3.14, # Output only. The maximum of the population values. + "mean": 3.14, # Output only. The arithmetic mean of the values in the population. + "median": 3.14, # Output only. The median of the values in the population. + "min": 3.14, # Output only. The minimum of the population values. + "p5": 3.14, # Output only. The 5th percentile of the values in the population. + "p95": 3.14, # Output only. The 95th percentile of the values in the population. + "sum": 3.14, # Output only. Sum of a given population of values. + }, + }, + }, "supervisedTuningDataStats": { # Tuning data statistics for Supervised Tuning. # The SFT Tuning data stats. "totalBillableCharacterCount": "A String", # Output only. Number of billable characters in the tuning dataset. "totalTuningCharacterCount": "A String", # Output only. Number of tuning characters in the tuning dataset. @@ -612,6 +1416,274 @@

Method Details

}, "tunedModelDisplayName": "A String", # Optional. The display name of the TunedModel. The name can be up to 128 characters long and can consist of any UTF-8 characters. "tuningDataStats": { # The tuning data statistic values for TuningJob. # Output only. The tuning data statistics associated with this TuningJob. + "distillationDataStats": { # Statistics computed for datasets used for distillation. # Statistics for distillation. + "trainingDatasetStats": { # Statistics computed over a tuning dataset. # Output only. Statistics computed for the training dataset. + "totalBillableCharacterCount": "A String", # Output only. Number of billable characters in the tuning dataset. + "totalTuningCharacterCount": "A String", # Output only. Number of tuning characters in the tuning dataset. + "tuningDatasetExampleCount": "A String", # Output only. Number of examples in the tuning dataset. + "tuningStepCount": "A String", # Output only. Number of tuning steps for this Tuning Job. + "userDatasetExamples": [ # Output only. Sample user messages in the training dataset uri. + { # The base structured datatype containing multi-part content of a message. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn. + "parts": [ # Required. Ordered `Parts` that constitute a single message. Parts may have different IANA MIME types. + { # A datatype containing media that is part of a multi-part `Content` message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. A `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes. + "fileData": { # URI based data. # Optional. URI based data. + "fileUri": "A String", # Required. URI. + "mimeType": "A String", # Required. The IANA standard MIME type of the source data. + }, + "functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] with the parameters and their values. + "args": { # Optional. Required. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details. + "a_key": "", # Properties of the object. + }, + "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name]. + }, + "functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result output of a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function call. It is used as context to the model. + "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name]. + "response": { # Required. The function response in JSON object format. + "a_key": "", # Properties of the object. + }, + }, + "inlineData": { # Content blob. 
It's preferred to send as text directly rather than raw bytes. # Optional. Inlined bytes data. + "data": "A String", # Required. Raw bytes. + "mimeType": "A String", # Required. The IANA standard MIME type of the source data. + }, + "text": "A String", # Optional. Text part (can be code). + "videoMetadata": { # Metadata describes the input video content. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data. + "endOffset": "A String", # Optional. The end offset of the video. + "startOffset": "A String", # Optional. The start offset of the video. + }, + }, + ], + "role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. Useful to set for multi-turn conversations, otherwise can be left blank or unset. + }, + ], + "userInputTokenDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the user input tokens. + "buckets": [ # Output only. Defines the histogram bucket. + { # Dataset bucket used to create a histogram for the distribution given a population of values. + "count": "A String", # Output only. Number of values in the bucket. + "left": 3.14, # Output only. Left bound of the bucket. + "right": 3.14, # Output only. Right bound of the bucket. + }, + ], + "max": 3.14, # Output only. The maximum of the population values. + "mean": 3.14, # Output only. The arithmetic mean of the values in the population. + "median": 3.14, # Output only. The median of the values in the population. + "min": 3.14, # Output only. The minimum of the population values. + "p5": 3.14, # Output only. The 5th percentile of the values in the population. + "p95": 3.14, # Output only. The 95th percentile of the values in the population. + "sum": 3.14, # Output only. Sum of a given population of values. + }, + "userMessagePerExampleDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the messages per example. + "buckets": [ # Output only. Defines the histogram bucket. + { # Dataset bucket used to create a histogram for the distribution given a population of values. + "count": "A String", # Output only. Number of values in the bucket. + "left": 3.14, # Output only. Left bound of the bucket. + "right": 3.14, # Output only. Right bound of the bucket. + }, + ], + "max": 3.14, # Output only. The maximum of the population values. + "mean": 3.14, # Output only. The arithmetic mean of the values in the population. + "median": 3.14, # Output only. The median of the values in the population. + "min": 3.14, # Output only. The minimum of the population values. + "p5": 3.14, # Output only. The 5th percentile of the values in the population. + "p95": 3.14, # Output only. The 95th percentile of the values in the population. + "sum": 3.14, # Output only. Sum of a given population of values. + }, + "userOutputTokenDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the user output tokens. + "buckets": [ # Output only. Defines the histogram bucket. + { # Dataset bucket used to create a histogram for the distribution given a population of values. + "count": "A String", # Output only. Number of values in the bucket. + "left": 3.14, # Output only. Left bound of the bucket. + "right": 3.14, # Output only. Right bound of the bucket. + }, + ], + "max": 3.14, # Output only. The maximum of the population values. + "mean": 3.14, # Output only. 
The arithmetic mean of the values in the population. + "median": 3.14, # Output only. The median of the values in the population. + "min": 3.14, # Output only. The minimum of the population values. + "p5": 3.14, # Output only. The 5th percentile of the values in the population. + "p95": 3.14, # Output only. The 95th percentile of the values in the population. + "sum": 3.14, # Output only. Sum of a given population of values. + }, + }, + }, + "reinforcementLearningDataStats": { # Statistics computed for datasets used for reinforcement learning. # Statistics for reinforcement learning. + "preferenceDatasetStats": { # Statistics computed over a tuning dataset. # Output only. Statistics computed for the preference dataset. This can be either a human preference dataset or a preference dataset generated from AI feedback. + "totalBillableCharacterCount": "A String", # Output only. Number of billable characters in the tuning dataset. + "totalTuningCharacterCount": "A String", # Output only. Number of tuning characters in the tuning dataset. + "tuningDatasetExampleCount": "A String", # Output only. Number of examples in the tuning dataset. + "tuningStepCount": "A String", # Output only. Number of tuning steps for this Tuning Job. + "userDatasetExamples": [ # Output only. Sample user messages in the training dataset uri. + { # The base structured datatype containing multi-part content of a message. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn. + "parts": [ # Required. Ordered `Parts` that constitute a single message. Parts may have different IANA MIME types. + { # A datatype containing media that is part of a multi-part `Content` message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. A `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes. + "fileData": { # URI based data. # Optional. URI based data. + "fileUri": "A String", # Required. URI. + "mimeType": "A String", # Required. The IANA standard MIME type of the source data. + }, + "functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] with the parameters and their values. + "args": { # Optional. Required. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details. + "a_key": "", # Properties of the object. + }, + "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name]. + }, + "functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result output of a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function call. It is used as context to the model. + "name": "A String", # Required. 
The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name]. + "response": { # Required. The function response in JSON object format. + "a_key": "", # Properties of the object. + }, + }, + "inlineData": { # Content blob. It's preferred to send as text directly rather than raw bytes. # Optional. Inlined bytes data. + "data": "A String", # Required. Raw bytes. + "mimeType": "A String", # Required. The IANA standard MIME type of the source data. + }, + "text": "A String", # Optional. Text part (can be code). + "videoMetadata": { # Metadata describes the input video content. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data. + "endOffset": "A String", # Optional. The end offset of the video. + "startOffset": "A String", # Optional. The start offset of the video. + }, + }, + ], + "role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. Useful to set for multi-turn conversations, otherwise can be left blank or unset. + }, + ], + "userInputTokenDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the user input tokens. + "buckets": [ # Output only. Defines the histogram bucket. + { # Dataset bucket used to create a histogram for the distribution given a population of values. + "count": "A String", # Output only. Number of values in the bucket. + "left": 3.14, # Output only. Left bound of the bucket. + "right": 3.14, # Output only. Right bound of the bucket. + }, + ], + "max": 3.14, # Output only. The maximum of the population values. + "mean": 3.14, # Output only. The arithmetic mean of the values in the population. + "median": 3.14, # Output only. The median of the values in the population. + "min": 3.14, # Output only. The minimum of the population values. + "p5": 3.14, # Output only. The 5th percentile of the values in the population. + "p95": 3.14, # Output only. The 95th percentile of the values in the population. + "sum": 3.14, # Output only. Sum of a given population of values. + }, + "userMessagePerExampleDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the messages per example. + "buckets": [ # Output only. Defines the histogram bucket. + { # Dataset bucket used to create a histogram for the distribution given a population of values. + "count": "A String", # Output only. Number of values in the bucket. + "left": 3.14, # Output only. Left bound of the bucket. + "right": 3.14, # Output only. Right bound of the bucket. + }, + ], + "max": 3.14, # Output only. The maximum of the population values. + "mean": 3.14, # Output only. The arithmetic mean of the values in the population. + "median": 3.14, # Output only. The median of the values in the population. + "min": 3.14, # Output only. The minimum of the population values. + "p5": 3.14, # Output only. The 5th percentile of the values in the population. + "p95": 3.14, # Output only. The 95th percentile of the values in the population. + "sum": 3.14, # Output only. Sum of a given population of values. + }, + "userOutputTokenDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the user output tokens. + "buckets": [ # Output only. Defines the histogram bucket. + { # Dataset bucket used to create a histogram for the distribution given a population of values. + "count": "A String", # Output only. Number of values in the bucket. 
+ "left": 3.14, # Output only. Left bound of the bucket. + "right": 3.14, # Output only. Right bound of the bucket. + }, + ], + "max": 3.14, # Output only. The maximum of the population values. + "mean": 3.14, # Output only. The arithmetic mean of the values in the population. + "median": 3.14, # Output only. The median of the values in the population. + "min": 3.14, # Output only. The minimum of the population values. + "p5": 3.14, # Output only. The 5th percentile of the values in the population. + "p95": 3.14, # Output only. The 95th percentile of the values in the population. + "sum": 3.14, # Output only. Sum of a given population of values. + }, + }, + "promptDatasetStats": { # Statistics computed over a tuning dataset. # Output only. Statistics computed for the prompt dataset used during reinforcement learning. + "totalBillableCharacterCount": "A String", # Output only. Number of billable characters in the tuning dataset. + "totalTuningCharacterCount": "A String", # Output only. Number of tuning characters in the tuning dataset. + "tuningDatasetExampleCount": "A String", # Output only. Number of examples in the tuning dataset. + "tuningStepCount": "A String", # Output only. Number of tuning steps for this Tuning Job. + "userDatasetExamples": [ # Output only. Sample user messages in the training dataset uri. + { # The base structured datatype containing multi-part content of a message. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn. + "parts": [ # Required. Ordered `Parts` that constitute a single message. Parts may have different IANA MIME types. + { # A datatype containing media that is part of a multi-part `Content` message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. A `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes. + "fileData": { # URI based data. # Optional. URI based data. + "fileUri": "A String", # Required. URI. + "mimeType": "A String", # Required. The IANA standard MIME type of the source data. + }, + "functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] with the parameters and their values. + "args": { # Optional. Required. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details. + "a_key": "", # Properties of the object. + }, + "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name]. + }, + "functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result output of a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function call. It is used as context to the model. + "name": "A String", # Required. 
The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name]. + "response": { # Required. The function response in JSON object format. + "a_key": "", # Properties of the object. + }, + }, + "inlineData": { # Content blob. It's preferred to send as text directly rather than raw bytes. # Optional. Inlined bytes data. + "data": "A String", # Required. Raw bytes. + "mimeType": "A String", # Required. The IANA standard MIME type of the source data. + }, + "text": "A String", # Optional. Text part (can be code). + "videoMetadata": { # Metadata describes the input video content. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data. + "endOffset": "A String", # Optional. The end offset of the video. + "startOffset": "A String", # Optional. The start offset of the video. + }, + }, + ], + "role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. Useful to set for multi-turn conversations, otherwise can be left blank or unset. + }, + ], + "userInputTokenDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the user input tokens. + "buckets": [ # Output only. Defines the histogram bucket. + { # Dataset bucket used to create a histogram for the distribution given a population of values. + "count": "A String", # Output only. Number of values in the bucket. + "left": 3.14, # Output only. Left bound of the bucket. + "right": 3.14, # Output only. Right bound of the bucket. + }, + ], + "max": 3.14, # Output only. The maximum of the population values. + "mean": 3.14, # Output only. The arithmetic mean of the values in the population. + "median": 3.14, # Output only. The median of the values in the population. + "min": 3.14, # Output only. The minimum of the population values. + "p5": 3.14, # Output only. The 5th percentile of the values in the population. + "p95": 3.14, # Output only. The 95th percentile of the values in the population. + "sum": 3.14, # Output only. Sum of a given population of values. + }, + "userMessagePerExampleDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the messages per example. + "buckets": [ # Output only. Defines the histogram bucket. + { # Dataset bucket used to create a histogram for the distribution given a population of values. + "count": "A String", # Output only. Number of values in the bucket. + "left": 3.14, # Output only. Left bound of the bucket. + "right": 3.14, # Output only. Right bound of the bucket. + }, + ], + "max": 3.14, # Output only. The maximum of the population values. + "mean": 3.14, # Output only. The arithmetic mean of the values in the population. + "median": 3.14, # Output only. The median of the values in the population. + "min": 3.14, # Output only. The minimum of the population values. + "p5": 3.14, # Output only. The 5th percentile of the values in the population. + "p95": 3.14, # Output only. The 95th percentile of the values in the population. + "sum": 3.14, # Output only. Sum of a given population of values. + }, + "userOutputTokenDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the user output tokens. + "buckets": [ # Output only. Defines the histogram bucket. + { # Dataset bucket used to create a histogram for the distribution given a population of values. + "count": "A String", # Output only. Number of values in the bucket. 
+ "left": 3.14, # Output only. Left bound of the bucket. + "right": 3.14, # Output only. Right bound of the bucket. + }, + ], + "max": 3.14, # Output only. The maximum of the population values. + "mean": 3.14, # Output only. The arithmetic mean of the values in the population. + "median": 3.14, # Output only. The median of the values in the population. + "min": 3.14, # Output only. The minimum of the population values. + "p5": 3.14, # Output only. The 5th percentile of the values in the population. + "p95": 3.14, # Output only. The 95th percentile of the values in the population. + "sum": 3.14, # Output only. Sum of a given population of values. + }, + }, + }, "supervisedTuningDataStats": { # Tuning data statistics for Supervised Tuning. # The SFT Tuning data stats. "totalBillableCharacterCount": "A String", # Output only. Number of billable characters in the tuning dataset. "totalTuningCharacterCount": "A String", # Output only. Number of tuning characters in the tuning dataset. diff --git a/docs/dyn/aiplatform_v1.publishers.models.html b/docs/dyn/aiplatform_v1.publishers.models.html index 083fd0a9f82..5a326d1bac6 100644 --- a/docs/dyn/aiplatform_v1.publishers.models.html +++ b/docs/dyn/aiplatform_v1.publishers.models.html @@ -161,7 +161,7 @@

Method Details

}, ], "healthProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes readiness probe. - "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take. + "exec": { # ExecAction specifies a command to execute. # ExecAction probes the health of a container by executing a command. "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. "A String", ], @@ -179,7 +179,7 @@

Method Details

"predictRoute": "A String", # Immutable. HTTP path on the container to send prediction requests to. Vertex AI forwards requests sent using projects.locations.endpoints.predict to this path on the container's IP address and port. Vertex AI then returns the container's response in the API response. For example, if you set this field to `/foo`, then when Vertex AI receives a prediction request, it forwards the request body in a POST request to the `/foo` path on the port of your container specified by the first value of this `ModelContainerSpec`'s ports field. If you don't specify this field, it defaults to the following value when you deploy this Model to an Endpoint: /v1/endpoints/ENDPOINT/deployedModels/DEPLOYED_MODEL:predict The placeholders in this value are replaced as follows: * ENDPOINT: The last segment (following `endpoints/`)of the Endpoint.name][] field of the Endpoint where this Model has been deployed. (Vertex AI makes this value available to your container code as the [`AIP_ENDPOINT_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) * DEPLOYED_MODEL: DeployedModel.id of the `DeployedModel`. (Vertex AI makes this value available to your container code as the [`AIP_DEPLOYED_MODEL_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) "sharedMemorySizeMb": "A String", # Immutable. The amount of the VM memory to reserve as the shared memory for the model in megabytes. "startupProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes startup probe. - "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take. + "exec": { # ExecAction specifies a command to execute. # ExecAction probes the health of a container by executing a command. "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. "A String", ], diff --git a/docs/dyn/aiplatform_v1beta1.projects.locations.batchPredictionJobs.html b/docs/dyn/aiplatform_v1beta1.projects.locations.batchPredictionJobs.html index 49a2e930ee5..07337c58301 100644 --- a/docs/dyn/aiplatform_v1beta1.projects.locations.batchPredictionJobs.html +++ b/docs/dyn/aiplatform_v1beta1.projects.locations.batchPredictionJobs.html @@ -490,7 +490,7 @@

Method Details

}, ], "healthProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes readiness probe. - "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take. + "exec": { # ExecAction specifies a command to execute. # ExecAction probes the health of a container by executing a command. "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. "A String", ], @@ -508,7 +508,7 @@

Method Details

"predictRoute": "A String", # Immutable. HTTP path on the container to send prediction requests to. Vertex AI forwards requests sent using projects.locations.endpoints.predict to this path on the container's IP address and port. Vertex AI then returns the container's response in the API response. For example, if you set this field to `/foo`, then when Vertex AI receives a prediction request, it forwards the request body in a POST request to the `/foo` path on the port of your container specified by the first value of this `ModelContainerSpec`'s ports field. If you don't specify this field, it defaults to the following value when you deploy this Model to an Endpoint: /v1/endpoints/ENDPOINT/deployedModels/DEPLOYED_MODEL:predict The placeholders in this value are replaced as follows: * ENDPOINT: The last segment (following `endpoints/`)of the Endpoint.name][] field of the Endpoint where this Model has been deployed. (Vertex AI makes this value available to your container code as the [`AIP_ENDPOINT_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) * DEPLOYED_MODEL: DeployedModel.id of the `DeployedModel`. (Vertex AI makes this value available to your container code as the [`AIP_DEPLOYED_MODEL_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) "sharedMemorySizeMb": "A String", # Immutable. The amount of the VM memory to reserve as the shared memory for the model in megabytes. "startupProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes startup probe. - "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take. + "exec": { # ExecAction specifies a command to execute. # ExecAction probes the health of a container by executing a command. "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. "A String", ], @@ -890,7 +890,7 @@

Method Details

}, ], "healthProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes readiness probe. - "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take. + "exec": { # ExecAction specifies a command to execute. # ExecAction probes the health of a container by executing a command. "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. "A String", ], @@ -908,7 +908,7 @@

Method Details

"predictRoute": "A String", # Immutable. HTTP path on the container to send prediction requests to. Vertex AI forwards requests sent using projects.locations.endpoints.predict to this path on the container's IP address and port. Vertex AI then returns the container's response in the API response. For example, if you set this field to `/foo`, then when Vertex AI receives a prediction request, it forwards the request body in a POST request to the `/foo` path on the port of your container specified by the first value of this `ModelContainerSpec`'s ports field. If you don't specify this field, it defaults to the following value when you deploy this Model to an Endpoint: /v1/endpoints/ENDPOINT/deployedModels/DEPLOYED_MODEL:predict The placeholders in this value are replaced as follows: * ENDPOINT: The last segment (following `endpoints/`)of the Endpoint.name][] field of the Endpoint where this Model has been deployed. (Vertex AI makes this value available to your container code as the [`AIP_ENDPOINT_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) * DEPLOYED_MODEL: DeployedModel.id of the `DeployedModel`. (Vertex AI makes this value available to your container code as the [`AIP_DEPLOYED_MODEL_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) "sharedMemorySizeMb": "A String", # Immutable. The amount of the VM memory to reserve as the shared memory for the model in megabytes. "startupProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes startup probe. - "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take. + "exec": { # ExecAction specifies a command to execute. # ExecAction probes the health of a container by executing a command. "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. "A String", ], @@ -1332,7 +1332,7 @@

Method Details

}, ], "healthProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes readiness probe. - "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take. + "exec": { # ExecAction specifies a command to execute. # ExecAction probes the health of a container by executing a command. "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. "A String", ], @@ -1350,7 +1350,7 @@
"predictRoute": "A String", # Immutable. HTTP path on the container to send prediction requests to. Vertex AI forwards requests sent using projects.locations.endpoints.predict to this path on the container's IP address and port. Vertex AI then returns the container's response in the API response. For example, if you set this field to `/foo`, then when Vertex AI receives a prediction request, it forwards the request body in a POST request to the `/foo` path on the port of your container specified by the first value of this `ModelContainerSpec`'s ports field. If you don't specify this field, it defaults to the following value when you deploy this Model to an Endpoint: /v1/endpoints/ENDPOINT/deployedModels/DEPLOYED_MODEL:predict The placeholders in this value are replaced as follows: * ENDPOINT: The last segment (following `endpoints/`)of the Endpoint.name][] field of the Endpoint where this Model has been deployed. (Vertex AI makes this value available to your container code as the [`AIP_ENDPOINT_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) * DEPLOYED_MODEL: DeployedModel.id of the `DeployedModel`. (Vertex AI makes this value available to your container code as the [`AIP_DEPLOYED_MODEL_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) "sharedMemorySizeMb": "A String", # Immutable. The amount of the VM memory to reserve as the shared memory for the model in megabytes. "startupProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes startup probe. - "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take. + "exec": { # ExecAction specifies a command to execute. # ExecAction probes the health of a container by executing a command. "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. "A String", ], @@ -1745,7 +1745,7 @@
}, ], "healthProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes readiness probe. - "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take. + "exec": { # ExecAction specifies a command to execute. # ExecAction probes the health of a container by executing a command. "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. "A String", ], @@ -1763,7 +1763,7 @@
"predictRoute": "A String", # Immutable. HTTP path on the container to send prediction requests to. Vertex AI forwards requests sent using projects.locations.endpoints.predict to this path on the container's IP address and port. Vertex AI then returns the container's response in the API response. For example, if you set this field to `/foo`, then when Vertex AI receives a prediction request, it forwards the request body in a POST request to the `/foo` path on the port of your container specified by the first value of this `ModelContainerSpec`'s ports field. If you don't specify this field, it defaults to the following value when you deploy this Model to an Endpoint: /v1/endpoints/ENDPOINT/deployedModels/DEPLOYED_MODEL:predict The placeholders in this value are replaced as follows: * ENDPOINT: The last segment (following `endpoints/`)of the Endpoint.name][] field of the Endpoint where this Model has been deployed. (Vertex AI makes this value available to your container code as the [`AIP_ENDPOINT_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) * DEPLOYED_MODEL: DeployedModel.id of the `DeployedModel`. (Vertex AI makes this value available to your container code as the [`AIP_DEPLOYED_MODEL_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) "sharedMemorySizeMb": "A String", # Immutable. The amount of the VM memory to reserve as the shared memory for the model in megabytes. "startupProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes startup probe. - "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take. + "exec": { # ExecAction specifies a command to execute. # ExecAction probes the health of a container by executing a command. "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. "A String", ], diff --git a/docs/dyn/aiplatform_v1beta1.projects.locations.cachedContents.html b/docs/dyn/aiplatform_v1beta1.projects.locations.cachedContents.html new file mode 100644 index 00000000000..ee23f781c9c --- /dev/null +++ b/docs/dyn/aiplatform_v1beta1.projects.locations.cachedContents.html @@ -0,0 +1,1259 @@ + + + +
+Vertex AI API . projects . locations . cachedContents
+
+Instance Methods
+
+  close()
+    Close httplib2 connections.
+  create(parent, body=None, x__xgafv=None)
+    Creates cached content. This call initializes the cached content in data storage, and users pay for the cached data storage.
+  delete(name, x__xgafv=None)
+    Deletes cached content.
+  get(name, x__xgafv=None)
+    Gets cached content configurations.
+  list(parent, pageSize=None, pageToken=None, x__xgafv=None)
+    Lists cached contents in a project.
+  list_next()
+    Retrieves the next page of results.
+  patch(name, body=None, updateMask=None, x__xgafv=None)
+    Updates cached content configurations.
+
+Method Details
+
+close()
+  Close httplib2 connections.
+
+create(parent, body=None, x__xgafv=None)
+  Creates cached content. This call initializes the cached content in data storage, and users pay for the cached data storage.
+
+Args:
+  parent: string, Required. The parent resource where the cached content will be created (required)
+  body: object, The request body.
+    The object takes the form of:
+
+{ # A resource used in LLM queries for users to explicitly specify what to cache and how to cache.
+  "contents": [ # Optional. Input only. Immutable. The content to cache
+    { # The base structured datatype containing multi-part content of a message. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn.
+      "parts": [ # Required. Ordered `Parts` that constitute a single message. Parts may have different IANA MIME types.
+        { # A datatype containing media that is part of a multi-part `Content` message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. A `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
+          "fileData": { # URI based data. # Optional. URI based data.
+            "fileUri": "A String", # Required. URI.
+            "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
+          },
+          "functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] with the parameters and their values.
+            "args": { # Optional. Required. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
+              "a_key": "", # Properties of the object.
+            },
+            "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name].
+          },
+          "functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result output of a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function call. It is used as context to the model.
+            "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
+            "response": { # Required. The function response in JSON object format.
+              "a_key": "", # Properties of the object.
+            },
+          },
+          "inlineData": { # Content blob. It's preferred to send as text directly rather than raw bytes. # Optional. Inlined bytes data.
+            "data": "A String", # Required. Raw bytes.
+            "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
+          },
+          "text": "A String", # Optional. Text part (can be code).
+          "videoMetadata": { # Metadata describes the input video content. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
+            "endOffset": "A String", # Optional. The end offset of the video.
+            "startOffset": "A String", # Optional. The start offset of the video.
+          },
+        },
+      ],
+      "role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. Useful to set for multi-turn conversations, otherwise can be left blank or unset.
+    },
+  ],
+  "createTime": "A String", # Output only. Creatation time of the cache entry.
+  "expireTime": "A String", # Timestamp of when this resource is considered expired. This is *always* provided on output, regardless of what was sent on input.
+  "model": "A String", # Immutable. The name of the publisher model to use for cached content. Format: projects/{project}/locations/{location}/publishers/{publisher}/models/{model}
+  "name": "A String", # Immutable. Identifier. The resource name of the cached content Format: projects/{project}/locations/{location}/cachedContents/{cached_content}
+  "systemInstruction": { # The base structured datatype containing multi-part content of a message. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn. # Optional. Input only. Immutable. Developer set system instruction. Currently, text only
+    "parts": [ # Required. Ordered `Parts` that constitute a single message. Parts may have different IANA MIME types.
+      { # A datatype containing media that is part of a multi-part `Content` message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. A `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
+        "fileData": { # URI based data. # Optional. URI based data.
+          "fileUri": "A String", # Required. URI.
+          "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
+        },
+        "functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] with the parameters and their values.
+          "args": { # Optional. Required. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
+            "a_key": "", # Properties of the object.
+          },
+          "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name].
+        },
+        "functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result output of a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function call. It is used as context to the model.
+          "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
+          "response": { # Required. The function response in JSON object format.
+            "a_key": "", # Properties of the object.
+          },
+        },
+        "inlineData": { # Content blob. It's preferred to send as text directly rather than raw bytes. # Optional. Inlined bytes data.
+          "data": "A String", # Required. Raw bytes.
+          "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
+        },
+        "text": "A String", # Optional. Text part (can be code).
+        "videoMetadata": { # Metadata describes the input video content. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
+          "endOffset": "A String", # Optional. The end offset of the video.
+          "startOffset": "A String", # Optional. The start offset of the video.
+        },
+      },
+    ],
+    "role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. Useful to set for multi-turn conversations, otherwise can be left blank or unset.
+  },
+  "toolConfig": { # Tool config. This config is shared for all tools provided in the request. # Optional. Input only. Immutable. Tool config. This config is shared for all tools
+    "functionCallingConfig": { # Function calling config. # Optional. Function calling config.
+      "allowedFunctionNames": [ # Optional. Function names to call. Only set when the Mode is ANY. Function names should match [FunctionDeclaration.name]. With mode set to ANY, model will predict a function call from the set of function names provided.
+        "A String",
+      ],
+      "mode": "A String", # Optional. Function calling mode.
+    },
+  },
+  "tools": [ # Optional. Input only. Immutable. A list of `Tools` the model may use to generate the next response
+    { # Tool details that the model may use to generate a response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of the knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g. FunctionDeclaration, Retrieval, or GoogleSearchRetrieval).
+      "functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 64 function declarations can be provided.
+        { # Structured representation of a function declaration as defined by the [OpenAPI 3.0 specification](https://spec.openapis.org/oas/v3.0.3). Included in this declaration are the function name and parameters. This FunctionDeclaration is a representation of a block of code that can be used as a `Tool` by the model and executed by the client.
+          "description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function.
+          "name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots and dashes, with a maximum length of 64.
+          "parameters": { # Schema is used to define the format of input/output data. Represents a select subset of an [OpenAPI 3.0 schema object](https://spec.openapis.org/oas/v3.0.3#schema). More fields may be added in the future as needed. # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1
+            "default": "", # Optional. Default value of the data.
+            "description": "A String", # Optional. The description of the data.
+            "enum": [ # Optional. Possible values of the element of Type.STRING with enum format. For example we can define an Enum Direction as : {type:STRING, format:enum, enum:["EAST", NORTH", "SOUTH", "WEST"]}
+              "A String",
+            ],
+            "example": "", # Optional. Example of the object. Will only populated when the object is the root.
+            "format": "A String", # Optional. The format of the data. Supported formats: for NUMBER type: "float", "double" for INTEGER type: "int32", "int64" for STRING type: "email", "byte", etc
+            "items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. SCHEMA FIELDS FOR TYPE ARRAY Schema of the elements of Type.ARRAY.
+            "maxItems": "A String", # Optional. Maximum number of the elements for Type.ARRAY.
+            "maxLength": "A String", # Optional. Maximum length of the Type.STRING
+            "maxProperties": "A String", # Optional. Maximum number of the properties for Type.OBJECT.
+            "maximum": 3.14, # Optional. Maximum value of the Type.INTEGER and Type.NUMBER
+            "minItems": "A String", # Optional. Minimum number of the elements for Type.ARRAY.
+            "minLength": "A String", # Optional. SCHEMA FIELDS FOR TYPE STRING Minimum length of the Type.STRING
+            "minProperties": "A String", # Optional. Minimum number of the properties for Type.OBJECT.
+            "minimum": 3.14, # Optional. SCHEMA FIELDS FOR TYPE INTEGER and NUMBER Minimum value of the Type.INTEGER and Type.NUMBER
+            "nullable": True or False, # Optional. Indicates if the value may be null.
+            "pattern": "A String", # Optional. Pattern of the Type.STRING to restrict a string to a regular expression.
+            "properties": { # Optional. SCHEMA FIELDS FOR TYPE OBJECT Properties of Type.OBJECT.
+              "a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
+            },
+            "required": [ # Optional. Required properties of Type.OBJECT.
+              "A String",
+            ],
+            "title": "A String", # Optional. The title of the Schema.
+            "type": "A String", # Optional. The type of the data.
+          },
+          "response": { # Schema is used to define the format of input/output data. Represents a select subset of an [OpenAPI 3.0 schema object](https://spec.openapis.org/oas/v3.0.3#schema). More fields may be added in the future as needed. # Optional. Describes the output from this function in JSON Schema format. Reflects the Open API 3.03 Response Object. The Schema defines the type used for the response value of the function.
+            "default": "", # Optional. Default value of the data.
+            "description": "A String", # Optional. The description of the data.
+            "enum": [ # Optional. Possible values of the element of Type.STRING with enum format. For example we can define an Enum Direction as : {type:STRING, format:enum, enum:["EAST", NORTH", "SOUTH", "WEST"]}
+              "A String",
+            ],
+            "example": "", # Optional. Example of the object. Will only populated when the object is the root.
+            "format": "A String", # Optional. The format of the data. Supported formats: for NUMBER type: "float", "double" for INTEGER type: "int32", "int64" for STRING type: "email", "byte", etc
+            "items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. SCHEMA FIELDS FOR TYPE ARRAY Schema of the elements of Type.ARRAY.
+            "maxItems": "A String", # Optional. Maximum number of the elements for Type.ARRAY.
+            "maxLength": "A String", # Optional. Maximum length of the Type.STRING
+            "maxProperties": "A String", # Optional. Maximum number of the properties for Type.OBJECT.
+            "maximum": 3.14, # Optional. Maximum value of the Type.INTEGER and Type.NUMBER
+            "minItems": "A String", # Optional. Minimum number of the elements for Type.ARRAY.
+            "minLength": "A String", # Optional. SCHEMA FIELDS FOR TYPE STRING Minimum length of the Type.STRING
+            "minProperties": "A String", # Optional. Minimum number of the properties for Type.OBJECT.
+            "minimum": 3.14, # Optional. SCHEMA FIELDS FOR TYPE INTEGER and NUMBER Minimum value of the Type.INTEGER and Type.NUMBER
+            "nullable": True or False, # Optional. Indicates if the value may be null.
+            "pattern": "A String", # Optional. Pattern of the Type.STRING to restrict a string to a regular expression.
+            "properties": { # Optional. SCHEMA FIELDS FOR TYPE OBJECT Properties of Type.OBJECT.
+              "a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
+            },
+            "required": [ # Optional. Required properties of Type.OBJECT.
+              "A String",
+            ],
+            "title": "A String", # Optional. The title of the Schema.
+            "type": "A String", # Optional. The type of the data.
+          },
+        },
+      ],
+      "googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. GoogleSearchRetrieval tool type. Specialized retrieval tool that is powered by Google search.
+      },
+      "retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
+        "disableAttribution": True or False, # Optional. Disable using the result from this tool in detecting grounding attribution. This does not affect how the result is given to the model for generation.
+        "vertexAiSearch": { # Retrieve from Vertex AI Search datastore for grounding. See https://cloud.google.com/vertex-ai-search-and-conversation # Set to use data source powered by Vertex AI Search.
+          "datastore": "A String", # Required. Fully-qualified Vertex AI Search's datastore resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
+        },
+        "vertexRagStore": { # Retrieve from Vertex RAG Store for grounding. # Set to use data source powered by Vertex RAG store. User data is uploaded via the VertexRagDataService.
+          "ragCorpora": [ # Optional. Deprecated. Please use rag_resources instead.
+            "A String",
+          ],
+          "ragResources": [ # Optional. The representation of the rag source. It can be used to specify corpus only or ragfiles. Currently only support one corpus or multiple files from one corpus. In the future we may open up multiple corpora support.
+            { # The definition of the Rag resource.
+              "ragCorpus": "A String", # Optional. RagCorpora resource name. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}`
+              "ragFileIds": [ # Optional. rag_file_id. The files should be in the same rag_corpus set in rag_corpus field.
+                "A String",
+              ],
+            },
+          ],
+          "similarityTopK": 42, # Optional. Number of top k results to return from the selected corpora.
+          "vectorDistanceThreshold": 3.14, # Optional. Only return results with vector distance smaller than the threshold.
+        },
+      },
+    },
+  ],
+  "ttl": "A String", # Input only. The TTL for this resource. The expiration time is computed: now + TTL.
+  "updateTime": "A String", # Output only. When the cache entry was last updated in UTC time.
+}
+
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # A resource used in LLM queries for users to explicitly specify what to cache and how to cache.
+  "contents": [ # Optional. Input only. Immutable. The content to cache
+    { # The base structured datatype containing multi-part content of a message. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn.
+      "parts": [ # Required. Ordered `Parts` that constitute a single message. Parts may have different IANA MIME types.
+        { # A datatype containing media that is part of a multi-part `Content` message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. A `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
+          "fileData": { # URI based data. # Optional. URI based data.
+            "fileUri": "A String", # Required. URI.
+            "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
+          },
+          "functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] with the parameters and their values.
+            "args": { # Optional. Required. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
+              "a_key": "", # Properties of the object.
+            },
+            "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name].
+          },
+          "functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result output of a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function call. It is used as context to the model.
+            "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
+            "response": { # Required. The function response in JSON object format.
+              "a_key": "", # Properties of the object.
+            },
+          },
+          "inlineData": { # Content blob. It's preferred to send as text directly rather than raw bytes. # Optional. Inlined bytes data.
+            "data": "A String", # Required. Raw bytes.
+            "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
+          },
+          "text": "A String", # Optional. Text part (can be code).
+          "videoMetadata": { # Metadata describes the input video content. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
+            "endOffset": "A String", # Optional. The end offset of the video.
+            "startOffset": "A String", # Optional. The start offset of the video.
+          },
+        },
+      ],
+      "role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. Useful to set for multi-turn conversations, otherwise can be left blank or unset.
+    },
+  ],
+  "createTime": "A String", # Output only. Creatation time of the cache entry.
+  "expireTime": "A String", # Timestamp of when this resource is considered expired. This is *always* provided on output, regardless of what was sent on input.
+  "model": "A String", # Immutable. The name of the publisher model to use for cached content. Format: projects/{project}/locations/{location}/publishers/{publisher}/models/{model}
+  "name": "A String", # Immutable. Identifier. The resource name of the cached content Format: projects/{project}/locations/{location}/cachedContents/{cached_content}
+  "systemInstruction": { # The base structured datatype containing multi-part content of a message. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn. # Optional. Input only. Immutable. Developer set system instruction. Currently, text only
+    "parts": [ # Required. Ordered `Parts` that constitute a single message. Parts may have different IANA MIME types.
+      { # A datatype containing media that is part of a multi-part `Content` message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. A `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
+        "fileData": { # URI based data. # Optional. URI based data.
+          "fileUri": "A String", # Required. URI.
+          "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
+        },
+        "functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] with the parameters and their values.
+          "args": { # Optional. Required. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
+            "a_key": "", # Properties of the object.
+          },
+          "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name].
+        },
+        "functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result output of a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function call. It is used as context to the model.
+          "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
+          "response": { # Required. The function response in JSON object format.
+            "a_key": "", # Properties of the object.
+          },
+        },
+        "inlineData": { # Content blob. It's preferred to send as text directly rather than raw bytes. # Optional. Inlined bytes data.
+          "data": "A String", # Required. Raw bytes.
+          "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
+        },
+        "text": "A String", # Optional. Text part (can be code).
+        "videoMetadata": { # Metadata describes the input video content. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
+          "endOffset": "A String", # Optional. The end offset of the video.
+          "startOffset": "A String", # Optional. The start offset of the video.
+        },
+      },
+    ],
+    "role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. Useful to set for multi-turn conversations, otherwise can be left blank or unset.
+  },
+  "toolConfig": { # Tool config. This config is shared for all tools provided in the request. # Optional. Input only. Immutable. Tool config. This config is shared for all tools
+    "functionCallingConfig": { # Function calling config. # Optional. Function calling config.
+      "allowedFunctionNames": [ # Optional. Function names to call. Only set when the Mode is ANY. Function names should match [FunctionDeclaration.name]. With mode set to ANY, model will predict a function call from the set of function names provided.
+        "A String",
+      ],
+      "mode": "A String", # Optional. Function calling mode.
+    },
+  },
+  "tools": [ # Optional. Input only. Immutable. A list of `Tools` the model may use to generate the next response
+    { # Tool details that the model may use to generate a response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of the knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g. FunctionDeclaration, Retrieval, or GoogleSearchRetrieval).
+      "functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 64 function declarations can be provided.
+        { # Structured representation of a function declaration as defined by the [OpenAPI 3.0 specification](https://spec.openapis.org/oas/v3.0.3). Included in this declaration are the function name and parameters. This FunctionDeclaration is a representation of a block of code that can be used as a `Tool` by the model and executed by the client.
+          "description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function.
+          "name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots and dashes, with a maximum length of 64.
+          "parameters": { # Schema is used to define the format of input/output data. Represents a select subset of an [OpenAPI 3.0 schema object](https://spec.openapis.org/oas/v3.0.3#schema). More fields may be added in the future as needed. # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1
+            "default": "", # Optional. Default value of the data.
+            "description": "A String", # Optional. The description of the data.
+            "enum": [ # Optional. Possible values of the element of Type.STRING with enum format. For example we can define an Enum Direction as : {type:STRING, format:enum, enum:["EAST", NORTH", "SOUTH", "WEST"]}
+              "A String",
+            ],
+            "example": "", # Optional. Example of the object. Will only populated when the object is the root.
+            "format": "A String", # Optional. The format of the data. Supported formats: for NUMBER type: "float", "double" for INTEGER type: "int32", "int64" for STRING type: "email", "byte", etc
+            "items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. SCHEMA FIELDS FOR TYPE ARRAY Schema of the elements of Type.ARRAY.
+            "maxItems": "A String", # Optional. Maximum number of the elements for Type.ARRAY.
+            "maxLength": "A String", # Optional. Maximum length of the Type.STRING
+            "maxProperties": "A String", # Optional. Maximum number of the properties for Type.OBJECT.
+            "maximum": 3.14, # Optional. Maximum value of the Type.INTEGER and Type.NUMBER
+            "minItems": "A String", # Optional. Minimum number of the elements for Type.ARRAY.
+            "minLength": "A String", # Optional. SCHEMA FIELDS FOR TYPE STRING Minimum length of the Type.STRING
+            "minProperties": "A String", # Optional. Minimum number of the properties for Type.OBJECT.
+            "minimum": 3.14, # Optional. SCHEMA FIELDS FOR TYPE INTEGER and NUMBER Minimum value of the Type.INTEGER and Type.NUMBER
+            "nullable": True or False, # Optional. Indicates if the value may be null.
+            "pattern": "A String", # Optional. Pattern of the Type.STRING to restrict a string to a regular expression.
+            "properties": { # Optional. SCHEMA FIELDS FOR TYPE OBJECT Properties of Type.OBJECT.
+              "a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
+            },
+            "required": [ # Optional. Required properties of Type.OBJECT.
+              "A String",
+            ],
+            "title": "A String", # Optional. The title of the Schema.
+            "type": "A String", # Optional. The type of the data.
+          },
+          "response": { # Schema is used to define the format of input/output data. Represents a select subset of an [OpenAPI 3.0 schema object](https://spec.openapis.org/oas/v3.0.3#schema). More fields may be added in the future as needed. # Optional. Describes the output from this function in JSON Schema format. Reflects the Open API 3.03 Response Object. The Schema defines the type used for the response value of the function.
+            "default": "", # Optional. Default value of the data.
+            "description": "A String", # Optional. The description of the data.
+            "enum": [ # Optional. Possible values of the element of Type.STRING with enum format. For example we can define an Enum Direction as : {type:STRING, format:enum, enum:["EAST", NORTH", "SOUTH", "WEST"]}
+              "A String",
+            ],
+            "example": "", # Optional. Example of the object. Will only populated when the object is the root.
+            "format": "A String", # Optional. The format of the data. Supported formats: for NUMBER type: "float", "double" for INTEGER type: "int32", "int64" for STRING type: "email", "byte", etc
+            "items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. SCHEMA FIELDS FOR TYPE ARRAY Schema of the elements of Type.ARRAY.
+            "maxItems": "A String", # Optional. Maximum number of the elements for Type.ARRAY.
+            "maxLength": "A String", # Optional. Maximum length of the Type.STRING
+            "maxProperties": "A String", # Optional. Maximum number of the properties for Type.OBJECT.
+            "maximum": 3.14, # Optional. Maximum value of the Type.INTEGER and Type.NUMBER
+            "minItems": "A String", # Optional. Minimum number of the elements for Type.ARRAY.
+            "minLength": "A String", # Optional. SCHEMA FIELDS FOR TYPE STRING Minimum length of the Type.STRING
+            "minProperties": "A String", # Optional. Minimum number of the properties for Type.OBJECT.
+            "minimum": 3.14, # Optional. SCHEMA FIELDS FOR TYPE INTEGER and NUMBER Minimum value of the Type.INTEGER and Type.NUMBER
+            "nullable": True or False, # Optional. Indicates if the value may be null.
+            "pattern": "A String", # Optional. Pattern of the Type.STRING to restrict a string to a regular expression.
+            "properties": { # Optional. SCHEMA FIELDS FOR TYPE OBJECT Properties of Type.OBJECT.
+              "a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
+            },
+            "required": [ # Optional. Required properties of Type.OBJECT.
+              "A String",
+            ],
+            "title": "A String", # Optional. The title of the Schema.
+            "type": "A String", # Optional. The type of the data.
+          },
+        },
+      ],
+      "googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. GoogleSearchRetrieval tool type. Specialized retrieval tool that is powered by Google search.
+      },
+      "retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
+        "disableAttribution": True or False, # Optional. Disable using the result from this tool in detecting grounding attribution. This does not affect how the result is given to the model for generation.
+        "vertexAiSearch": { # Retrieve from Vertex AI Search datastore for grounding. See https://cloud.google.com/vertex-ai-search-and-conversation # Set to use data source powered by Vertex AI Search.
+          "datastore": "A String", # Required. Fully-qualified Vertex AI Search's datastore resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
+        },
+        "vertexRagStore": { # Retrieve from Vertex RAG Store for grounding. # Set to use data source powered by Vertex RAG store. User data is uploaded via the VertexRagDataService.
+          "ragCorpora": [ # Optional. Deprecated. Please use rag_resources instead.
+            "A String",
+          ],
+          "ragResources": [ # Optional. The representation of the rag source. It can be used to specify corpus only or ragfiles. Currently only support one corpus or multiple files from one corpus. In the future we may open up multiple corpora support.
+            { # The definition of the Rag resource.
+              "ragCorpus": "A String", # Optional. RagCorpora resource name. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}`
+              "ragFileIds": [ # Optional. rag_file_id. The files should be in the same rag_corpus set in rag_corpus field.
+                "A String",
+              ],
+            },
+          ],
+          "similarityTopK": 42, # Optional. Number of top k results to return from the selected corpora.
+          "vectorDistanceThreshold": 3.14, # Optional. Only return results with vector distance smaller than the threshold.
+        },
+      },
+    },
+  ],
+  "ttl": "A String", # Input only. The TTL for this resource. The expiration time is computed: now + TTL.
+  "updateTime": "A String", # Output only. When the cache entry was last updated in UTC time.
+}
+
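For orientation, a minimal usage sketch (not part of the generated reference) of calling create() through google-api-python-client. The project, location, publisher model ID, cached text, and TTL below are illustrative assumptions.

from googleapiclient.discovery import build

aiplatform = build("aiplatform", "v1beta1")
parent = "projects/my-project/locations/us-central1"  # hypothetical

body = {
    # Immutable: the publisher model this cache entry is bound to (assumed ID).
    "model": parent + "/publishers/google/models/gemini-1.5-pro-001",
    # Input only, immutable: the content to cache.
    "contents": [
        {"role": "user", "parts": [{"text": "A very long document to reuse across queries ..."}]},
    ],
    # System instruction is currently text only.
    "systemInstruction": {"parts": [{"text": "Answer using only the cached document."}]},
    "ttl": "3600s",  # expire_time is computed as now + TTL
}
cached = (
    aiplatform.projects().locations().cachedContents()
    .create(parent=parent, body=body)
    .execute()
)
print(cached["name"], cached["expireTime"])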
+delete(name, x__xgafv=None)
+  Deletes cached content.
+
+Args:
+  name: string, Required. The resource name referring to the cached content (required)
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # This resource represents a long-running operation that is the result of a network API call.
+  "done": True or False, # If the value is `false`, it means the operation is still in progress. If `true`, the operation is completed, and either `error` or `response` is available.
+  "error": { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # The error result of the operation in case of failure or cancellation.
+    "code": 42, # The status code, which should be an enum value of google.rpc.Code.
+    "details": [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
+      {
+        "a_key": "", # Properties of the object. Contains field @type with type URL.
+      },
+    ],
+    "message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
+  },
+  "metadata": { # Service-specific metadata associated with the operation. It typically contains progress information and common metadata such as create time. Some services might not provide such metadata. Any method that returns a long-running operation should document the metadata type, if any.
+    "a_key": "", # Properties of the object. Contains field @type with type URL.
+  },
+  "name": "A String", # The server-assigned name, which is only unique within the same service that originally returns it. If you use the default HTTP mapping, the `name` should be a resource name ending with `operations/{unique_id}`.
+  "response": { # The normal, successful response of the operation. If the original method returns no data on success, such as `Delete`, the response is `google.protobuf.Empty`. If the original method is standard `Get`/`Create`/`Update`, the response should be the resource. For other methods, the response should have the type `XxxResponse`, where `Xxx` is the original method name. For example, if the original method name is `TakeSnapshot()`, the inferred response type is `TakeSnapshotResponse`.
+    "a_key": "", # Properties of the object. Contains field @type with type URL.
+  },
+}
+
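A hedged sketch (not from the generated reference) of calling delete() and inspecting the long-running Operation it returns; the resource name is a placeholder.

from googleapiclient.discovery import build

aiplatform = build("aiplatform", "v1beta1")
name = "projects/my-project/locations/us-central1/cachedContents/123"  # placeholder

op = aiplatform.projects().locations().cachedContents().delete(name=name).execute()
if op.get("done"):
    # On failure `error` is set; for Delete, a successful `response` is Empty.
    if "error" in op:
        raise RuntimeError(op["error"].get("message", "delete failed"))
else:
    print("deletion still in progress:", op.get("name"))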
+get(name, x__xgafv=None)
+  Gets cached content configurations.
+
+Args:
+  name: string, Required. The resource name referring to the cached content (required)
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # A resource used in LLM queries for users to explicitly specify what to cache and how to cache.
+  "contents": [ # Optional. Input only. Immutable. The content to cache
+    { # The base structured datatype containing multi-part content of a message. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn.
+      "parts": [ # Required. Ordered `Parts` that constitute a single message. Parts may have different IANA MIME types.
+        { # A datatype containing media that is part of a multi-part `Content` message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. A `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
+          "fileData": { # URI based data. # Optional. URI based data.
+            "fileUri": "A String", # Required. URI.
+            "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
+          },
+          "functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] with the parameters and their values.
+            "args": { # Optional. Required. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
+              "a_key": "", # Properties of the object.
+            },
+            "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name].
+          },
+          "functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result output of a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function call. It is used as context to the model.
+            "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
+            "response": { # Required. The function response in JSON object format.
+              "a_key": "", # Properties of the object.
+            },
+          },
+          "inlineData": { # Content blob. It's preferred to send as text directly rather than raw bytes. # Optional. Inlined bytes data.
+            "data": "A String", # Required. Raw bytes.
+            "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
+          },
+          "text": "A String", # Optional. Text part (can be code).
+          "videoMetadata": { # Metadata describes the input video content. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
+            "endOffset": "A String", # Optional. The end offset of the video.
+            "startOffset": "A String", # Optional. The start offset of the video.
+          },
+        },
+      ],
+      "role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. Useful to set for multi-turn conversations, otherwise can be left blank or unset.
+    },
+  ],
+  "createTime": "A String", # Output only. Creatation time of the cache entry.
+  "expireTime": "A String", # Timestamp of when this resource is considered expired. This is *always* provided on output, regardless of what was sent on input.
+  "model": "A String", # Immutable. The name of the publisher model to use for cached content. Format: projects/{project}/locations/{location}/publishers/{publisher}/models/{model}
+  "name": "A String", # Immutable. Identifier. The resource name of the cached content Format: projects/{project}/locations/{location}/cachedContents/{cached_content}
+  "systemInstruction": { # The base structured datatype containing multi-part content of a message. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn. # Optional. Input only. Immutable. Developer set system instruction. Currently, text only
+    "parts": [ # Required. Ordered `Parts` that constitute a single message. Parts may have different IANA MIME types.
+      { # A datatype containing media that is part of a multi-part `Content` message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. A `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
+        "fileData": { # URI based data. # Optional. URI based data.
+          "fileUri": "A String", # Required. URI.
+          "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
+        },
+        "functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] with the parameters and their values.
+          "args": { # Optional. Required. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
+            "a_key": "", # Properties of the object.
+          },
+          "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name].
+        },
+        "functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result output of a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function call. It is used as context to the model.
+          "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
+          "response": { # Required. The function response in JSON object format.
+            "a_key": "", # Properties of the object.
+          },
+        },
+        "inlineData": { # Content blob. It's preferred to send as text directly rather than raw bytes. # Optional. Inlined bytes data.
+          "data": "A String", # Required. Raw bytes.
+          "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
+        },
+        "text": "A String", # Optional. Text part (can be code).
+        "videoMetadata": { # Metadata describes the input video content. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
+          "endOffset": "A String", # Optional. The end offset of the video.
+          "startOffset": "A String", # Optional. The start offset of the video.
+        },
+      },
+    ],
+    "role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. Useful to set for multi-turn conversations, otherwise can be left blank or unset.
+  },
+  "toolConfig": { # Tool config. This config is shared for all tools provided in the request. # Optional. Input only. Immutable. Tool config. This config is shared for all tools
+    "functionCallingConfig": { # Function calling config. # Optional. Function calling config.
+      "allowedFunctionNames": [ # Optional. Function names to call. Only set when the Mode is ANY. Function names should match [FunctionDeclaration.name]. With mode set to ANY, model will predict a function call from the set of function names provided.
+        "A String",
+      ],
+      "mode": "A String", # Optional. Function calling mode.
+    },
+  },
+  "tools": [ # Optional. Input only. Immutable. A list of `Tools` the model may use to generate the next response
+    { # Tool details that the model may use to generate a response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of the knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g. FunctionDeclaration, Retrieval, or GoogleSearchRetrieval).
+      "functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 64 function declarations can be provided.
+        { # Structured representation of a function declaration as defined by the [OpenAPI 3.0 specification](https://spec.openapis.org/oas/v3.0.3). Included in this declaration are the function name and parameters. This FunctionDeclaration is a representation of a block of code that can be used as a `Tool` by the model and executed by the client.
+          "description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function.
+          "name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots and dashes, with a maximum length of 64.
+          "parameters": { # Schema is used to define the format of input/output data. Represents a select subset of an [OpenAPI 3.0 schema object](https://spec.openapis.org/oas/v3.0.3#schema). More fields may be added in the future as needed. # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the OpenAPI 3.0.3 Parameter Object. String key: the name of the parameter. Parameter names are case sensitive. Schema value: the Schema defining the type used for the parameter. For a function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1
+            "default": "", # Optional. Default value of the data.
+            "description": "A String", # Optional. The description of the data.
+            "enum": [ # Optional. Possible values of the element of Type.STRING with enum format. For example, we can define an Enum Direction as: {type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}
+              "A String",
+            ],
+            "example": "", # Optional. Example of the object. Will only be populated when the object is the root.
+            "format": "A String", # Optional. The format of the data. Supported formats: for NUMBER type: "float", "double" for INTEGER type: "int32", "int64" for STRING type: "email", "byte", etc.
+            "items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. SCHEMA FIELDS FOR TYPE ARRAY Schema of the elements of Type.ARRAY.
+            "maxItems": "A String", # Optional. Maximum number of the elements for Type.ARRAY.
+            "maxLength": "A String", # Optional. Maximum length of the Type.STRING
+            "maxProperties": "A String", # Optional. Maximum number of the properties for Type.OBJECT.
+            "maximum": 3.14, # Optional. Maximum value of the Type.INTEGER and Type.NUMBER
+            "minItems": "A String", # Optional. Minimum number of the elements for Type.ARRAY.
+            "minLength": "A String", # Optional. SCHEMA FIELDS FOR TYPE STRING Minimum length of the Type.STRING
+            "minProperties": "A String", # Optional. Minimum number of the properties for Type.OBJECT.
+            "minimum": 3.14, # Optional. SCHEMA FIELDS FOR TYPE INTEGER and NUMBER Minimum value of the Type.INTEGER and Type.NUMBER
+            "nullable": True or False, # Optional. Indicates if the value may be null.
+            "pattern": "A String", # Optional. Pattern of the Type.STRING to restrict a string to a regular expression.
+            "properties": { # Optional. SCHEMA FIELDS FOR TYPE OBJECT Properties of Type.OBJECT.
+              "a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
+            },
+            "required": [ # Optional. Required properties of Type.OBJECT.
+              "A String",
+            ],
+            "title": "A String", # Optional. The title of the Schema.
+            "type": "A String", # Optional. The type of the data.
+          },
+          "response": { # Schema is used to define the format of input/output data. Represents a select subset of an [OpenAPI 3.0 schema object](https://spec.openapis.org/oas/v3.0.3#schema). More fields may be added in the future as needed. # Optional. Describes the output from this function in JSON Schema format. Reflects the OpenAPI 3.0.3 Response Object. The Schema defines the type used for the response value of the function.
+            "default": "", # Optional. Default value of the data.
+            "description": "A String", # Optional. The description of the data.
+            "enum": [ # Optional. Possible values of the element of Type.STRING with enum format. For example, we can define an Enum Direction as: {type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}
+              "A String",
+            ],
+            "example": "", # Optional. Example of the object. Will only be populated when the object is the root.
+            "format": "A String", # Optional. The format of the data. Supported formats: for NUMBER type: "float", "double" for INTEGER type: "int32", "int64" for STRING type: "email", "byte", etc.
+            "items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. SCHEMA FIELDS FOR TYPE ARRAY Schema of the elements of Type.ARRAY.
+            "maxItems": "A String", # Optional. Maximum number of the elements for Type.ARRAY.
+            "maxLength": "A String", # Optional. Maximum length of the Type.STRING
+            "maxProperties": "A String", # Optional. Maximum number of the properties for Type.OBJECT.
+            "maximum": 3.14, # Optional. Maximum value of the Type.INTEGER and Type.NUMBER
+            "minItems": "A String", # Optional. Minimum number of the elements for Type.ARRAY.
+            "minLength": "A String", # Optional. SCHEMA FIELDS FOR TYPE STRING Minimum length of the Type.STRING
+            "minProperties": "A String", # Optional. Minimum number of the properties for Type.OBJECT.
+            "minimum": 3.14, # Optional. SCHEMA FIELDS FOR TYPE INTEGER and NUMBER Minimum value of the Type.INTEGER and Type.NUMBER
+            "nullable": True or False, # Optional. Indicates if the value may be null.
+            "pattern": "A String", # Optional. Pattern of the Type.STRING to restrict a string to a regular expression.
+            "properties": { # Optional. SCHEMA FIELDS FOR TYPE OBJECT Properties of Type.OBJECT.
+              "a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
+            },
+            "required": [ # Optional. Required properties of Type.OBJECT.
+              "A String",
+            ],
+            "title": "A String", # Optional. The title of the Schema.
+            "type": "A String", # Optional. The type of the data.
+          },
+        },
+      ],
+      "googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. GoogleSearchRetrieval tool type. Specialized retrieval tool that is powered by Google search.
+      },
+      "retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
+        "disableAttribution": True or False, # Optional. Disable using the result from this tool in detecting grounding attribution. This does not affect how the result is given to the model for generation.
+        "vertexAiSearch": { # Retrieve from Vertex AI Search datastore for grounding. See https://cloud.google.com/vertex-ai-search-and-conversation # Set to use data source powered by Vertex AI Search.
+          "datastore": "A String", # Required. Fully-qualified Vertex AI Search's datastore resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
+        },
+        "vertexRagStore": { # Retrieve from Vertex RAG Store for grounding. # Set to use data source powered by Vertex RAG store. User data is uploaded via the VertexRagDataService.
+          "ragCorpora": [ # Optional. Deprecated. Please use rag_resources instead.
+            "A String",
+          ],
+          "ragResources": [ # Optional. The representation of the rag source. It can be used to specify a corpus only or rag files. Currently it only supports one corpus or multiple files from one corpus. In the future we may open up multiple corpora support.
+            { # The definition of the Rag resource.
+              "ragCorpus": "A String", # Optional. RagCorpora resource name. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}`
+              "ragFileIds": [ # Optional. rag_file_id. The files should be in the same rag_corpus set in the rag_corpus field.
+                "A String",
+              ],
+            },
+          ],
+          "similarityTopK": 42, # Optional. Number of top k results to return from the selected corpora.
+          "vectorDistanceThreshold": 3.14, # Optional. Only return results with vector distance smaller than the threshold.
+        },
+      },
+    },
+  ],
+  "ttl": "A String", # Input only. The TTL for this resource. The expiration time is computed: now + TTL.
+  "updateTime": "A String", # Output only. When the cache entry was last updated in UTC time.
+}
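+
+The CachedContent schema above is large, but most of its fields are optional.
+As an illustrative sketch (not part of the generated reference), a minimal
+resource of this form could be built as the following Python dict; the
+project, location, and model values are hypothetical placeholders.
+
+# A minimal, hypothetical CachedContent resource built from the schema above.
+cached_content = {
+    # Publisher model path; project, location, and model are placeholders.
+    "model": "projects/my-project/locations/us-central1/publishers/google/models/gemini-1.5-pro-001",
+    "contents": [
+        {
+            "role": "user",
+            "parts": [{"text": "Long reference text worth caching ..."}],
+        },
+    ],
+    "ttl": "3600s",  # Expiration is computed as now + TTL.
+}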
+
+list(parent, pageSize=None, pageToken=None, x__xgafv=None)
+Lists cached contents in a project.
+
+Args:
+  parent: string, Required. The parent, which owns this collection of cached contents. (required)
+  pageSize: integer, Optional. The maximum number of cached contents to return. The service may return fewer than this value. If unspecified, some default (under maximum) number of items will be returned. The maximum value is 1000; values above 1000 will be coerced to 1000.
+  pageToken: string, Optional. A page token, received from a previous `ListCachedContents` call. Provide this to retrieve the subsequent page. When paginating, all other parameters provided to `ListCachedContents` must match the call that provided the page token.
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Response with a list of CachedContents.
+  "cachedContents": [ # List of cached contents.
+    { # A resource used in LLM queries for users to explicitly specify what to cache and how to cache.
+      "contents": [ # Optional. Input only. Immutable. The content to cache
+        { # The base structured datatype containing multi-part content of a message. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn.
+          "parts": [ # Required. Ordered `Parts` that constitute a single message. Parts may have different IANA MIME types.
+            { # A datatype containing media that is part of a multi-part `Content` message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. A `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
+              "fileData": { # URI based data. # Optional. URI based data.
+                "fileUri": "A String", # Required. URI.
+                "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
+              },
+              "functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] with the parameters and their values.
+                "args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
+                  "a_key": "", # Properties of the object.
+                },
+                "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name].
+              },
+              "functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function; it is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result output of a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function call. It is used as context to the model.
+                "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
+                "response": { # Required. The function response in JSON object format.
+                  "a_key": "", # Properties of the object.
+                },
+              },
+              "inlineData": { # Content blob. It is preferred to send text directly rather than raw bytes. # Optional. Inlined bytes data.
+                "data": "A String", # Required. Raw bytes.
+                "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
+              },
+              "text": "A String", # Optional. Text part (can be code).
+              "videoMetadata": { # Metadata describes the input video content. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
+                "endOffset": "A String", # Optional. The end offset of the video.
+                "startOffset": "A String", # Optional. The start offset of the video.
+              },
+            },
+          ],
+          "role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. Useful to set for multi-turn conversations, otherwise can be left blank or unset.
+        },
+      ],
+      "createTime": "A String", # Output only. Creation time of the cache entry.
+      "expireTime": "A String", # Timestamp of when this resource is considered expired. This is *always* provided on output, regardless of what was sent on input.
+      "model": "A String", # Immutable. The name of the publisher model to use for cached content. Format: projects/{project}/locations/{location}/publishers/{publisher}/models/{model}
+      "name": "A String", # Immutable. Identifier. The resource name of the cached content. Format: projects/{project}/locations/{location}/cachedContents/{cached_content}
+      "systemInstruction": { # The base structured datatype containing multi-part content of a message. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn. # Optional. Input only. Immutable. Developer-set system instruction. Currently, text only
+        "parts": [ # Required. Ordered `Parts` that constitute a single message. Parts may have different IANA MIME types.
+          { # A datatype containing media that is part of a multi-part `Content` message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. A `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
+            "fileData": { # URI based data. # Optional. URI based data.
+              "fileUri": "A String", # Required. URI.
+              "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
+            },
+            "functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] with the parameters and their values.
+            "args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
+              "a_key": "", # Properties of the object.
+            },
+            "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name].
+          },
+          "functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function; it is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result output of a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function call. It is used as context to the model.
+            "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
+            "response": { # Required. The function response in JSON object format.
+              "a_key": "", # Properties of the object.
+            },
+          },
+          "inlineData": { # Content blob. It is preferred to send text directly rather than raw bytes. # Optional. Inlined bytes data.
+              "data": "A String", # Required. Raw bytes.
+              "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
+            },
+            "text": "A String", # Optional. Text part (can be code).
+            "videoMetadata": { # Metadata describes the input video content. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
+              "endOffset": "A String", # Optional. The end offset of the video.
+              "startOffset": "A String", # Optional. The start offset of the video.
+            },
+          },
+        ],
+        "role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. Useful to set for multi-turn conversations, otherwise can be left blank or unset.
+      },
+      "toolConfig": { # Tool config. This config is shared for all tools provided in the request. # Optional. Input only. Immutable. Tool config. This config is shared for all tools provided in the request.
+        "functionCallingConfig": { # Function calling config. # Optional. Function calling config.
+          "allowedFunctionNames": [ # Optional. Function names to call. Only set when the Mode is ANY. Function names should match [FunctionDeclaration.name]. With mode set to ANY, model will predict a function call from the set of function names provided.
+            "A String",
+          ],
+          "mode": "A String", # Optional. Function calling mode.
+        },
+      },
+      "tools": [ # Optional. Input only. Immutable. A list of `Tools` the model may use to generate the next response
+        { # Tool details that the model may use to generate a response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of the knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g. FunctionDeclaration, Retrieval, or GoogleSearchRetrieval).
+          "functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 64 function declarations can be provided.
+            { # Structured representation of a function declaration as defined by the [OpenAPI 3.0 specification](https://spec.openapis.org/oas/v3.0.3). Included in this declaration are the function name and parameters. This FunctionDeclaration is a representation of a block of code that can be used as a `Tool` by the model and executed by the client.
+              "description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function.
+              "name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots and dashes, with a maximum length of 64.
+              "parameters": { # Schema is used to define the format of input/output data. Represents a select subset of an [OpenAPI 3.0 schema object](https://spec.openapis.org/oas/v3.0.3#schema). More fields may be added in the future as needed. # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the OpenAPI 3.0.3 Parameter Object. String key: the name of the parameter. Parameter names are case sensitive. Schema value: the Schema defining the type used for the parameter. For a function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1
+                "default": "", # Optional. Default value of the data.
+                "description": "A String", # Optional. The description of the data.
+                "enum": [ # Optional. Possible values of the element of Type.STRING with enum format. For example, we can define an Enum Direction as: {type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}
+                  "A String",
+                ],
+                "example": "", # Optional. Example of the object. Will only be populated when the object is the root.
+                "format": "A String", # Optional. The format of the data. Supported formats: for NUMBER type: "float", "double" for INTEGER type: "int32", "int64" for STRING type: "email", "byte", etc.
+                "items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. SCHEMA FIELDS FOR TYPE ARRAY Schema of the elements of Type.ARRAY.
+                "maxItems": "A String", # Optional. Maximum number of the elements for Type.ARRAY.
+                "maxLength": "A String", # Optional. Maximum length of the Type.STRING
+                "maxProperties": "A String", # Optional. Maximum number of the properties for Type.OBJECT.
+                "maximum": 3.14, # Optional. Maximum value of the Type.INTEGER and Type.NUMBER
+                "minItems": "A String", # Optional. Minimum number of the elements for Type.ARRAY.
+                "minLength": "A String", # Optional. SCHEMA FIELDS FOR TYPE STRING Minimum length of the Type.STRING
+                "minProperties": "A String", # Optional. Minimum number of the properties for Type.OBJECT.
+                "minimum": 3.14, # Optional. SCHEMA FIELDS FOR TYPE INTEGER and NUMBER Minimum value of the Type.INTEGER and Type.NUMBER
+                "nullable": True or False, # Optional. Indicates if the value may be null.
+                "pattern": "A String", # Optional. Pattern of the Type.STRING to restrict a string to a regular expression.
+                "properties": { # Optional. SCHEMA FIELDS FOR TYPE OBJECT Properties of Type.OBJECT.
+                  "a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
+                },
+                "required": [ # Optional. Required properties of Type.OBJECT.
+                  "A String",
+                ],
+                "title": "A String", # Optional. The title of the Schema.
+                "type": "A String", # Optional. The type of the data.
+              },
+              "response": { # Schema is used to define the format of input/output data. Represents a select subset of an [OpenAPI 3.0 schema object](https://spec.openapis.org/oas/v3.0.3#schema). More fields may be added in the future as needed. # Optional. Describes the output from this function in JSON Schema format. Reflects the OpenAPI 3.0.3 Response Object. The Schema defines the type used for the response value of the function.
+                "default": "", # Optional. Default value of the data.
+                "description": "A String", # Optional. The description of the data.
+                "enum": [ # Optional. Possible values of the element of Type.STRING with enum format. For example, we can define an Enum Direction as: {type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}
+                  "A String",
+                ],
+                "example": "", # Optional. Example of the object. Will only be populated when the object is the root.
+                "format": "A String", # Optional. The format of the data. Supported formats: for NUMBER type: "float", "double" for INTEGER type: "int32", "int64" for STRING type: "email", "byte", etc.
+                "items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. SCHEMA FIELDS FOR TYPE ARRAY Schema of the elements of Type.ARRAY.
+                "maxItems": "A String", # Optional. Maximum number of the elements for Type.ARRAY.
+                "maxLength": "A String", # Optional. Maximum length of the Type.STRING
+                "maxProperties": "A String", # Optional. Maximum number of the properties for Type.OBJECT.
+                "maximum": 3.14, # Optional. Maximum value of the Type.INTEGER and Type.NUMBER
+                "minItems": "A String", # Optional. Minimum number of the elements for Type.ARRAY.
+                "minLength": "A String", # Optional. SCHEMA FIELDS FOR TYPE STRING Minimum length of the Type.STRING
+                "minProperties": "A String", # Optional. Minimum number of the properties for Type.OBJECT.
+                "minimum": 3.14, # Optional. SCHEMA FIELDS FOR TYPE INTEGER and NUMBER Minimum value of the Type.INTEGER and Type.NUMBER
+                "nullable": True or False, # Optional. Indicates if the value may be null.
+                "pattern": "A String", # Optional. Pattern of the Type.STRING to restrict a string to a regular expression.
+                "properties": { # Optional. SCHEMA FIELDS FOR TYPE OBJECT Properties of Type.OBJECT.
+                  "a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
+                },
+                "required": [ # Optional. Required properties of Type.OBJECT.
+                  "A String",
+                ],
+                "title": "A String", # Optional. The title of the Schema.
+                "type": "A String", # Optional. The type of the data.
+              },
+            },
+          ],
+          "googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. GoogleSearchRetrieval tool type. Specialized retrieval tool that is powered by Google search.
+          },
+          "retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
+            "disableAttribution": True or False, # Optional. Disable using the result from this tool in detecting grounding attribution. This does not affect how the result is given to the model for generation.
+            "vertexAiSearch": { # Retrieve from Vertex AI Search datastore for grounding. See https://cloud.google.com/vertex-ai-search-and-conversation # Set to use data source powered by Vertex AI Search.
+              "datastore": "A String", # Required. Fully-qualified Vertex AI Search's datastore resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
+            },
+            "vertexRagStore": { # Retrieve from Vertex RAG Store for grounding. # Set to use data source powered by Vertex RAG store. User data is uploaded via the VertexRagDataService.
+              "ragCorpora": [ # Optional. Deprecated. Please use rag_resources instead.
+                "A String",
+              ],
+              "ragResources": [ # Optional. The representation of the rag source. It can be used to specify a corpus only or rag files. Currently it only supports one corpus or multiple files from one corpus. In the future we may open up multiple corpora support.
+                { # The definition of the Rag resource.
+                  "ragCorpus": "A String", # Optional. RagCorpora resource name. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}`
+                  "ragFileIds": [ # Optional. rag_file_id. The files should be in the same rag_corpus set in the rag_corpus field.
+                    "A String",
+                  ],
+                },
+              ],
+              "similarityTopK": 42, # Optional. Number of top k results to return from the selected corpora.
+              "vectorDistanceThreshold": 3.14, # Optional. Only return results with vector distance smaller than the threshold.
+            },
+          },
+        },
+      ],
+      "ttl": "A String", # Input only. The TTL for this resource. The expiration time is computed: now + TTL.
+      "updateTime": "A String", # Output only. When the cache entry was last updated in UTC time.
+    },
+  ],
+  "nextPageToken": "A String", # A token, which can be sent as `page_token` to retrieve the next page. If this field is omitted, there are no subsequent pages.
+}
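+
+As a usage sketch (not part of the generated reference), a single page of
+results can be fetched as follows; this assumes application default
+credentials, and the project and location values are hypothetical
+placeholders.
+
+from googleapiclient import discovery
+
+# Build the aiplatform v1beta1 client from its public discovery document.
+service = discovery.build("aiplatform", "v1beta1")
+parent = "projects/my-project/locations/us-central1"  # placeholder values
+response = (
+    service.projects()
+    .locations()
+    .cachedContents()
+    .list(parent=parent, pageSize=100)
+    .execute()
+)
+for cc in response.get("cachedContents", []):
+    print(cc["name"], cc.get("model", ""))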
+
+list_next()
+Retrieves the next page of results.
+
+        Args:
+          previous_request: The request for the previous page. (required)
+          previous_response: The response from the request for the previous page. (required)
+
+        Returns:
+          A request object that you can call 'execute()' on to request the next
+          page. Returns None if there are no more items in the collection.
+        
+
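+To page through every result, pair list() with list_next() until it returns
+None. A minimal sketch, reusing the service and parent names from the list()
+example above:
+
+# Fetch all pages; list_next() returns None when no pages remain.
+cached = service.projects().locations().cachedContents()
+request = cached.list(parent=parent)
+while request is not None:
+    response = request.execute()
+    for cc in response.get("cachedContents", []):
+        print(cc["name"])
+    request = cached.list_next(previous_request=request, previous_response=response)
+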
+patch(name, body=None, updateMask=None, x__xgafv=None)
+Updates cached content configurations.
+
+Args:
+  name: string, Immutable. Identifier. The resource name of the cached content. Format: projects/{project}/locations/{location}/cachedContents/{cached_content} (required)
+  body: object, The request body.
+    The object takes the form of:
+
+{ # A resource used in LLM queries for users to explicitly specify what to cache and how to cache.
+  "contents": [ # Optional. Input only. Immutable. The content to cache
+    { # The base structured datatype containing multi-part content of a message. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn.
+      "parts": [ # Required. Ordered `Parts` that constitute a single message. Parts may have different IANA MIME types.
+        { # A datatype containing media that is part of a multi-part `Content` message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. A `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
+          "fileData": { # URI based data. # Optional. URI based data.
+            "fileUri": "A String", # Required. URI.
+            "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
+          },
+          "functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] with the parameters and their values.
+            "args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
+              "a_key": "", # Properties of the object.
+            },
+            "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name].
+          },
+          "functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function; it is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result output of a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function call. It is used as context to the model.
+            "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
+            "response": { # Required. The function response in JSON object format.
+              "a_key": "", # Properties of the object.
+            },
+          },
+          "inlineData": { # Content blob. It is preferred to send text directly rather than raw bytes. # Optional. Inlined bytes data.
+            "data": "A String", # Required. Raw bytes.
+            "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
+          },
+          "text": "A String", # Optional. Text part (can be code).
+          "videoMetadata": { # Metadata describes the input video content. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
+            "endOffset": "A String", # Optional. The end offset of the video.
+            "startOffset": "A String", # Optional. The start offset of the video.
+          },
+        },
+      ],
+      "role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. Useful to set for multi-turn conversations, otherwise can be left blank or unset.
+    },
+  ],
+  "createTime": "A String", # Output only. Creation time of the cache entry.
+  "expireTime": "A String", # Timestamp of when this resource is considered expired. This is *always* provided on output, regardless of what was sent on input.
+  "model": "A String", # Immutable. The name of the publisher model to use for cached content. Format: projects/{project}/locations/{location}/publishers/{publisher}/models/{model}
+  "name": "A String", # Immutable. Identifier. The resource name of the cached content. Format: projects/{project}/locations/{location}/cachedContents/{cached_content}
+  "systemInstruction": { # The base structured datatype containing multi-part content of a message. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn. # Optional. Input only. Immutable. Developer-set system instruction. Currently, text only
+    "parts": [ # Required. Ordered `Parts` that constitute a single message. Parts may have different IANA MIME types.
+      { # A datatype containing media that is part of a multi-part `Content` message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. A `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
+        "fileData": { # URI based data. # Optional. URI based data.
+          "fileUri": "A String", # Required. URI.
+          "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
+        },
+        "functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] with the parameters and their values.
+          "args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
+            "a_key": "", # Properties of the object.
+          },
+          "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name].
+        },
+        "functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function; it is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result output of a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function call. It is used as context to the model.
+          "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
+          "response": { # Required. The function response in JSON object format.
+            "a_key": "", # Properties of the object.
+          },
+        },
+        "inlineData": { # Content blob. It is preferred to send text directly rather than raw bytes. # Optional. Inlined bytes data.
+          "data": "A String", # Required. Raw bytes.
+          "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
+        },
+        "text": "A String", # Optional. Text part (can be code).
+        "videoMetadata": { # Metadata describes the input video content. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
+          "endOffset": "A String", # Optional. The end offset of the video.
+          "startOffset": "A String", # Optional. The start offset of the video.
+        },
+      },
+    ],
+    "role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. Useful to set for multi-turn conversations, otherwise can be left blank or unset.
+  },
+  "toolConfig": { # Tool config. This config is shared for all tools provided in the request. # Optional. Input only. Immutable. Tool config. This config is shared for all tools provided in the request.
+    "functionCallingConfig": { # Function calling config. # Optional. Function calling config.
+      "allowedFunctionNames": [ # Optional. Function names to call. Only set when the Mode is ANY. Function names should match [FunctionDeclaration.name]. With mode set to ANY, model will predict a function call from the set of function names provided.
+        "A String",
+      ],
+      "mode": "A String", # Optional. Function calling mode.
+    },
+  },
+  "tools": [ # Optional. Input only. Immutable. A list of `Tools` the model may use to generate the next response
+    { # Tool details that the model may use to generate a response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of the knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g. FunctionDeclaration, Retrieval, or GoogleSearchRetrieval).
+      "functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 64 function declarations can be provided.
+        { # Structured representation of a function declaration as defined by the [OpenAPI 3.0 specification](https://spec.openapis.org/oas/v3.0.3). Included in this declaration are the function name and parameters. This FunctionDeclaration is a representation of a block of code that can be used as a `Tool` by the model and executed by the client.
+          "description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function.
+          "name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots and dashes, with a maximum length of 64.
+          "parameters": { # Schema is used to define the format of input/output data. Represents a select subset of an [OpenAPI 3.0 schema object](https://spec.openapis.org/oas/v3.0.3#schema). More fields may be added in the future as needed. # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the OpenAPI 3.0.3 Parameter Object. String key: the name of the parameter. Parameter names are case sensitive. Schema value: the Schema defining the type used for the parameter. For a function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1
+            "default": "", # Optional. Default value of the data.
+            "description": "A String", # Optional. The description of the data.
+            "enum": [ # Optional. Possible values of the element of Type.STRING with enum format. For example, we can define an Enum Direction as: {type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}
+              "A String",
+            ],
+            "example": "", # Optional. Example of the object. Will only be populated when the object is the root.
+            "format": "A String", # Optional. The format of the data. Supported formats: for NUMBER type: "float", "double" for INTEGER type: "int32", "int64" for STRING type: "email", "byte", etc.
+            "items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. SCHEMA FIELDS FOR TYPE ARRAY Schema of the elements of Type.ARRAY.
+            "maxItems": "A String", # Optional. Maximum number of the elements for Type.ARRAY.
+            "maxLength": "A String", # Optional. Maximum length of the Type.STRING
+            "maxProperties": "A String", # Optional. Maximum number of the properties for Type.OBJECT.
+            "maximum": 3.14, # Optional. Maximum value of the Type.INTEGER and Type.NUMBER
+            "minItems": "A String", # Optional. Minimum number of the elements for Type.ARRAY.
+            "minLength": "A String", # Optional. SCHEMA FIELDS FOR TYPE STRING Minimum length of the Type.STRING
+            "minProperties": "A String", # Optional. Minimum number of the properties for Type.OBJECT.
+            "minimum": 3.14, # Optional. SCHEMA FIELDS FOR TYPE INTEGER and NUMBER Minimum value of the Type.INTEGER and Type.NUMBER
+            "nullable": True or False, # Optional. Indicates if the value may be null.
+            "pattern": "A String", # Optional. Pattern of the Type.STRING to restrict a string to a regular expression.
+            "properties": { # Optional. SCHEMA FIELDS FOR TYPE OBJECT Properties of Type.OBJECT.
+              "a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
+            },
+            "required": [ # Optional. Required properties of Type.OBJECT.
+              "A String",
+            ],
+            "title": "A String", # Optional. The title of the Schema.
+            "type": "A String", # Optional. The type of the data.
+          },
+          "response": { # Schema is used to define the format of input/output data. Represents a select subset of an [OpenAPI 3.0 schema object](https://spec.openapis.org/oas/v3.0.3#schema). More fields may be added in the future as needed. # Optional. Describes the output from this function in JSON Schema format. Reflects the OpenAPI 3.0.3 Response Object. The Schema defines the type used for the response value of the function.
+            "default": "", # Optional. Default value of the data.
+            "description": "A String", # Optional. The description of the data.
+            "enum": [ # Optional. Possible values of the element of Type.STRING with enum format. For example, we can define an Enum Direction as: {type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}
+              "A String",
+            ],
+            "example": "", # Optional. Example of the object. Will only be populated when the object is the root.
+            "format": "A String", # Optional. The format of the data. Supported formats: for NUMBER type: "float", "double" for INTEGER type: "int32", "int64" for STRING type: "email", "byte", etc.
+            "items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. SCHEMA FIELDS FOR TYPE ARRAY Schema of the elements of Type.ARRAY.
+            "maxItems": "A String", # Optional. Maximum number of the elements for Type.ARRAY.
+            "maxLength": "A String", # Optional. Maximum length of the Type.STRING
+            "maxProperties": "A String", # Optional. Maximum number of the properties for Type.OBJECT.
+            "maximum": 3.14, # Optional. Maximum value of the Type.INTEGER and Type.NUMBER
+            "minItems": "A String", # Optional. Minimum number of the elements for Type.ARRAY.
+            "minLength": "A String", # Optional. SCHEMA FIELDS FOR TYPE STRING Minimum length of the Type.STRING
+            "minProperties": "A String", # Optional. Minimum number of the properties for Type.OBJECT.
+            "minimum": 3.14, # Optional. SCHEMA FIELDS FOR TYPE INTEGER and NUMBER Minimum value of the Type.INTEGER and Type.NUMBER
+            "nullable": True or False, # Optional. Indicates if the value may be null.
+            "pattern": "A String", # Optional. Pattern of the Type.STRING to restrict a string to a regular expression.
+            "properties": { # Optional. SCHEMA FIELDS FOR TYPE OBJECT Properties of Type.OBJECT.
+              "a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
+            },
+            "required": [ # Optional. Required properties of Type.OBJECT.
+              "A String",
+            ],
+            "title": "A String", # Optional. The title of the Schema.
+            "type": "A String", # Optional. The type of the data.
+          },
+        },
+      ],
+      "googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. GoogleSearchRetrieval tool type. Specialized retrieval tool that is powered by Google search.
+      },
+      "retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
+        "disableAttribution": True or False, # Optional. Disable using the result from this tool in detecting grounding attribution. This does not affect how the result is given to the model for generation.
+        "vertexAiSearch": { # Retrieve from Vertex AI Search datastore for grounding. See https://cloud.google.com/vertex-ai-search-and-conversation # Set to use data source powered by Vertex AI Search.
+          "datastore": "A String", # Required. Fully-qualified Vertex AI Search's datastore resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
+        },
+        "vertexRagStore": { # Retrieve from Vertex RAG Store for grounding. # Set to use data source powered by Vertex RAG store. User data is uploaded via the VertexRagDataService.
+          "ragCorpora": [ # Optional. Deprecated. Please use rag_resources instead.
+            "A String",
+          ],
+          "ragResources": [ # Optional. The representation of the rag source. It can be used to specify a corpus only or rag files. Currently it only supports one corpus or multiple files from one corpus. In the future we may open up multiple corpora support.
+            { # The definition of the Rag resource.
+              "ragCorpus": "A String", # Optional. RagCorpora resource name. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}`
+              "ragFileIds": [ # Optional. rag_file_id. The files should be in the same rag_corpus set in the rag_corpus field.
+                "A String",
+              ],
+            },
+          ],
+          "similarityTopK": 42, # Optional. Number of top k results to return from the selected corpora.
+          "vectorDistanceThreshold": 3.14, # Optional. Only return results with vector distance smaller than the threshold.
+        },
+      },
+    },
+  ],
+  "ttl": "A String", # Input only. The TTL for this resource. The expiration time is computed: now + TTL.
+  "updateTime": "A String", # Output only. When the cache entry was last updated in UTC time.
+}
+
+  updateMask: string, Required. The list of fields to update.
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # A resource used in LLM queries for users to explicitly specify what to cache and how to cache.
+  "contents": [ # Optional. Input only. Immutable. The content to cache
+    { # The base structured datatype containing multi-part content of a message. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn.
+      "parts": [ # Required. Ordered `Parts` that constitute a single message. Parts may have different IANA MIME types.
+        { # A datatype containing media that is part of a multi-part `Content` message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. A `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
+          "fileData": { # URI based data. # Optional. URI based data.
+            "fileUri": "A String", # Required. URI.
+            "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
+          },
+          "functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] with the parameters and their values.
+            "args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
+              "a_key": "", # Properties of the object.
+            },
+            "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name].
+          },
+          "functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function; it is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result output of a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function call. It is used as context to the model.
+            "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
+            "response": { # Required. The function response in JSON object format.
+              "a_key": "", # Properties of the object.
+            },
+          },
+          "inlineData": { # Content blob. It is preferred to send text directly rather than raw bytes. # Optional. Inlined bytes data.
+            "data": "A String", # Required. Raw bytes.
+            "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
+          },
+          "text": "A String", # Optional. Text part (can be code).
+          "videoMetadata": { # Metadata describes the input video content. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
+            "endOffset": "A String", # Optional. The end offset of the video.
+            "startOffset": "A String", # Optional. The start offset of the video.
+          },
+        },
+      ],
+      "role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. Useful to set for multi-turn conversations, otherwise can be left blank or unset.
+    },
+  ],
+  "createTime": "A String", # Output only. Creation time of the cache entry.
+  "expireTime": "A String", # Timestamp of when this resource is considered expired. This is *always* provided on output, regardless of what was sent on input.
+  "model": "A String", # Immutable. The name of the publisher model to use for cached content. Format: projects/{project}/locations/{location}/publishers/{publisher}/models/{model}
+  "name": "A String", # Immutable. Identifier. The resource name of the cached content. Format: projects/{project}/locations/{location}/cachedContents/{cached_content}
+  "systemInstruction": { # The base structured datatype containing multi-part content of a message. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn. # Optional. Input only. Immutable. Developer-set system instruction. Currently, text only
+    "parts": [ # Required. Ordered `Parts` that constitute a single message. Parts may have different IANA MIME types.
+      { # A datatype containing media that is part of a multi-part `Content` message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. A `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
+        "fileData": { # URI based data. # Optional. URI based data.
+          "fileUri": "A String", # Required. URI.
+          "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
+        },
+        "functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] with the parameters and their values.
+          "args": { # Optional. Required. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
+            "a_key": "", # Properties of the object.
+          },
+          "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name].
+        },
+        "functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result output of a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function call. It is used as context to the model.
+          "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
+          "response": { # Required. The function response in JSON object format.
+            "a_key": "", # Properties of the object.
+          },
+        },
+        "inlineData": { # Content blob. It's preferred to send as text directly rather than raw bytes. # Optional. Inlined bytes data.
+          "data": "A String", # Required. Raw bytes.
+          "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
+        },
+        "text": "A String", # Optional. Text part (can be code).
+        "videoMetadata": { # Metadata describes the input video content. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
+          "endOffset": "A String", # Optional. The end offset of the video.
+          "startOffset": "A String", # Optional. The start offset of the video.
+        },
+      },
+    ],
+    "role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. Useful to set for multi-turn conversations, otherwise can be left blank or unset.
+  },
+  "toolConfig": { # Tool config. This config is shared for all tools provided in the request. # Optional. Input only. Immutable. Tool config. This config is shared for all tools
+    "functionCallingConfig": { # Function calling config. # Optional. Function calling config.
+      "allowedFunctionNames": [ # Optional. Function names to call. Only set when the Mode is ANY. Function names should match [FunctionDeclaration.name]. With mode set to ANY, model will predict a function call from the set of function names provided.
+        "A String",
+      ],
+      "mode": "A String", # Optional. Function calling mode.
+    },
+  },
+  "tools": [ # Optional. Input only. Immutable. A list of `Tools` the model may use to generate the next response
+    { # Tool details that the model may use to generate response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g FunctionDeclaration, Retrieval or GoogleSearchRetrieval).
+      "functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 64 function declarations can be provided.
+        { # Structured representation of a function declaration as defined by the [OpenAPI 3.0 specification](https://spec.openapis.org/oas/v3.0.3). Included in this declaration are the function name and parameters. This FunctionDeclaration is a representation of a block of code that can be used as a `Tool` by the model and executed by the client.
+          "description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function.
+          "name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots and dashes, with a maximum length of 64.
+          "parameters": { # Schema is used to define the format of input/output data. Represents a select subset of an [OpenAPI 3.0 schema object](https://spec.openapis.org/oas/v3.0.3#schema). More fields may be added in the future as needed. # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1
+            "default": "", # Optional. Default value of the data.
+            "description": "A String", # Optional. The description of the data.
+            "enum": [ # Optional. Possible values of the element of Type.STRING with enum format. For example we can define an Enum Direction as : {type:STRING, format:enum, enum:["EAST", NORTH", "SOUTH", "WEST"]}
+              "A String",
+            ],
+            "example": "", # Optional. Example of the object. Will only populated when the object is the root.
+            "format": "A String", # Optional. The format of the data. Supported formats: for NUMBER type: "float", "double" for INTEGER type: "int32", "int64" for STRING type: "email", "byte", etc
+            "items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. SCHEMA FIELDS FOR TYPE ARRAY Schema of the elements of Type.ARRAY.
+            "maxItems": "A String", # Optional. Maximum number of the elements for Type.ARRAY.
+            "maxLength": "A String", # Optional. Maximum length of the Type.STRING
+            "maxProperties": "A String", # Optional. Maximum number of the properties for Type.OBJECT.
+            "maximum": 3.14, # Optional. Maximum value of the Type.INTEGER and Type.NUMBER
+            "minItems": "A String", # Optional. Minimum number of the elements for Type.ARRAY.
+            "minLength": "A String", # Optional. SCHEMA FIELDS FOR TYPE STRING Minimum length of the Type.STRING
+            "minProperties": "A String", # Optional. Minimum number of the properties for Type.OBJECT.
+            "minimum": 3.14, # Optional. SCHEMA FIELDS FOR TYPE INTEGER and NUMBER Minimum value of the Type.INTEGER and Type.NUMBER
+            "nullable": True or False, # Optional. Indicates if the value may be null.
+            "pattern": "A String", # Optional. Pattern of the Type.STRING to restrict a string to a regular expression.
+            "properties": { # Optional. SCHEMA FIELDS FOR TYPE OBJECT Properties of Type.OBJECT.
+              "a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
+            },
+            "required": [ # Optional. Required properties of Type.OBJECT.
+              "A String",
+            ],
+            "title": "A String", # Optional. The title of the Schema.
+            "type": "A String", # Optional. The type of the data.
+          },
+          "response": { # Schema is used to define the format of input/output data. Represents a select subset of an [OpenAPI 3.0 schema object](https://spec.openapis.org/oas/v3.0.3#schema). More fields may be added in the future as needed. # Optional. Describes the output from this function in JSON Schema format. Reflects the Open API 3.03 Response Object. The Schema defines the type used for the response value of the function.
+            "default": "", # Optional. Default value of the data.
+            "description": "A String", # Optional. The description of the data.
+            "enum": [ # Optional. Possible values of the element of Type.STRING with enum format. For example we can define an Enum Direction as : {type:STRING, format:enum, enum:["EAST", NORTH", "SOUTH", "WEST"]}
+              "A String",
+            ],
+            "example": "", # Optional. Example of the object. Will only populated when the object is the root.
+            "format": "A String", # Optional. The format of the data. Supported formats: for NUMBER type: "float", "double" for INTEGER type: "int32", "int64" for STRING type: "email", "byte", etc
+            "items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. SCHEMA FIELDS FOR TYPE ARRAY Schema of the elements of Type.ARRAY.
+            "maxItems": "A String", # Optional. Maximum number of the elements for Type.ARRAY.
+            "maxLength": "A String", # Optional. Maximum length of the Type.STRING
+            "maxProperties": "A String", # Optional. Maximum number of the properties for Type.OBJECT.
+            "maximum": 3.14, # Optional. Maximum value of the Type.INTEGER and Type.NUMBER
+            "minItems": "A String", # Optional. Minimum number of the elements for Type.ARRAY.
+            "minLength": "A String", # Optional. SCHEMA FIELDS FOR TYPE STRING Minimum length of the Type.STRING
+            "minProperties": "A String", # Optional. Minimum number of the properties for Type.OBJECT.
+            "minimum": 3.14, # Optional. SCHEMA FIELDS FOR TYPE INTEGER and NUMBER Minimum value of the Type.INTEGER and Type.NUMBER
+            "nullable": True or False, # Optional. Indicates if the value may be null.
+            "pattern": "A String", # Optional. Pattern of the Type.STRING to restrict a string to a regular expression.
+            "properties": { # Optional. SCHEMA FIELDS FOR TYPE OBJECT Properties of Type.OBJECT.
+              "a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema
+            },
+            "required": [ # Optional. Required properties of Type.OBJECT.
+              "A String",
+            ],
+            "title": "A String", # Optional. The title of the Schema.
+            "type": "A String", # Optional. The type of the data.
+          },
+        },
+      ],
+      "googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. GoogleSearchRetrieval tool type. Specialized retrieval tool that is powered by Google search.
+      },
+      "retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
+        "disableAttribution": True or False, # Optional. Disable using the result from this tool in detecting grounding attribution. This does not affect how the result is given to the model for generation.
+        "vertexAiSearch": { # Retrieve from Vertex AI Search datastore for grounding. See https://cloud.google.com/vertex-ai-search-and-conversation # Set to use data source powered by Vertex AI Search.
+          "datastore": "A String", # Required. Fully-qualified Vertex AI Search's datastore resource ID. Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}`
+        },
+        "vertexRagStore": { # Retrieve from Vertex RAG Store for grounding. # Set to use data source powered by Vertex RAG store. User data is uploaded via the VertexRagDataService.
+          "ragCorpora": [ # Optional. Deprecated. Please use rag_resources instead.
+            "A String",
+          ],
+          "ragResources": [ # Optional. The representation of the rag source. It can be used to specify corpus only or ragfiles. Currently only support one corpus or multiple files from one corpus. In the future we may open up multiple corpora support.
+            { # The definition of the Rag resource.
+              "ragCorpus": "A String", # Optional. RagCorpora resource name. Format: `projects/{project}/locations/{location}/ragCorpora/{rag_corpus}`
+              "ragFileIds": [ # Optional. rag_file_id. The files should be in the same rag_corpus set in rag_corpus field.
+                "A String",
+              ],
+            },
+          ],
+          "similarityTopK": 42, # Optional. Number of top k results to return from the selected corpora.
+          "vectorDistanceThreshold": 3.14, # Optional. Only return results with vector distance smaller than the threshold.
+        },
+      },
+    },
+  ],
+  "ttl": "A String", # Input only. The TTL for this resource. The expiration time is computed: now + TTL.
+  "updateTime": "A String", # Output only. When the cache entry was last updated in UTC time.
+}
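For illustration, a cache entry matching the schema above could be created with the discovery-based Python client roughly as follows. This is a minimal sketch, not documented usage: the project, location, model ID, and TTL are placeholder assumptions, and explicit caching is only exposed on the v1beta1 surface.

from googleapiclient.discovery import build

# The regional endpoint is an assumption; Vertex AI methods are typically
# served from a location-specific host rather than the global one.
service = build(
    "aiplatform",
    "v1beta1",
    client_options={"api_endpoint": "https://us-central1-aiplatform.googleapis.com"},
)

parent = "projects/my-project/locations/us-central1"  # hypothetical project/location
body = {
    # Assumed publisher model; any model that supports explicit caching works.
    "model": f"{parent}/publishers/google/models/gemini-1.5-pro-001",
    "contents": [
        {"role": "user", "parts": [{"text": "A long document worth caching..."}]},
    ],
    "ttl": "3600s",  # input only; the service computes expireTime = now + TTL
}

cached = service.projects().locations().cachedContents().create(
    parent=parent, body=body
).execute()
print(cached["name"], cached["expireTime"])  # both fields are set by the server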
+
+
+
\ No newline at end of file
diff --git a/docs/dyn/aiplatform_v1beta1.projects.locations.deploymentResourcePools.html b/docs/dyn/aiplatform_v1beta1.projects.locations.deploymentResourcePools.html
index 54ba0cd581b..66609557ec4 100644
--- a/docs/dyn/aiplatform_v1beta1.projects.locations.deploymentResourcePools.html
+++ b/docs/dyn/aiplatform_v1beta1.projects.locations.deploymentResourcePools.html
@@ -137,7 +137,12 @@ Method Details
"maxReplicaCount": 42, # Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, will use min_replica_count as the default value. The value of this field impacts the charge against Vertex CPU and GPU quotas. Specifically, you will be charged for (max_replica_count * number of cores in the selected machine type) and (max_replica_count * number of GPUs per replica in the selected machine type). "minReplicaCount": 42, # Required. Immutable. The minimum number of machine replicas this DeployedModel will be always deployed on. This value must be greater than or equal to 1. If traffic against the DeployedModel increases, it may dynamically be deployed onto more replicas, and as traffic decreases, some of these extra replicas may be freed. }, + "disableContainerLogging": True or False, # If the DeploymentResourcePool is deployed with custom-trained Models or AutoML Tabular Models, the container(s) of the DeploymentResourcePool will send `stderr` and `stdout` streams to Cloud Logging by default. Please note that the logs incur cost, which are subject to [Cloud Logging pricing](https://cloud.google.com/logging/pricing). User can disable container logging by setting this flag to true. + "encryptionSpec": { # Represents a customer-managed encryption key spec that can be applied to a top-level resource. # Customer-managed encryption key spec for a DeploymentResourcePool. If set, this DeploymentResourcePool will be secured by this key. Endpoints and the DeploymentResourcePool they deploy in need to have the same EncryptionSpec. + "kmsKeyName": "A String", # Required. The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: `projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key`. The key needs to be in the same region as where the compute resource is created. + }, "name": "A String", # Immutable. The resource name of the DeploymentResourcePool. Format: `projects/{project}/locations/{location}/deploymentResourcePools/{deployment_resource_pool}` + "serviceAccount": "A String", # The service account that the DeploymentResourcePool's container(s) run as. Specify the email address of the service account. If this service account is not specified, the container(s) run as a service account that doesn't have access to the resource project. Users deploying the Models to this DeploymentResourcePool must have the `iam.serviceAccounts.actAs` permission on this service account. }, "deploymentResourcePoolId": "A String", # Required. The ID to use for the DeploymentResourcePool, which will become the final component of the DeploymentResourcePool's resource name. The maximum length is 63 characters, and valid characters are `/^[a-z]([a-z0-9-]{0,61}[a-z0-9])?$/`. } @@ -238,7 +243,12 @@
@@ -238,7 +243,12 @@ Method Details
"maxReplicaCount": 42, # Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, will use min_replica_count as the default value. The value of this field impacts the charge against Vertex CPU and GPU quotas. Specifically, you will be charged for (max_replica_count * number of cores in the selected machine type) and (max_replica_count * number of GPUs per replica in the selected machine type). "minReplicaCount": 42, # Required. Immutable. The minimum number of machine replicas this DeployedModel will be always deployed on. This value must be greater than or equal to 1. If traffic against the DeployedModel increases, it may dynamically be deployed onto more replicas, and as traffic decreases, some of these extra replicas may be freed. }, + "disableContainerLogging": True or False, # If the DeploymentResourcePool is deployed with custom-trained Models or AutoML Tabular Models, the container(s) of the DeploymentResourcePool will send `stderr` and `stdout` streams to Cloud Logging by default. Please note that the logs incur cost, which are subject to [Cloud Logging pricing](https://cloud.google.com/logging/pricing). User can disable container logging by setting this flag to true. + "encryptionSpec": { # Represents a customer-managed encryption key spec that can be applied to a top-level resource. # Customer-managed encryption key spec for a DeploymentResourcePool. If set, this DeploymentResourcePool will be secured by this key. Endpoints and the DeploymentResourcePool they deploy in need to have the same EncryptionSpec. + "kmsKeyName": "A String", # Required. The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: `projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key`. The key needs to be in the same region as where the compute resource is created. + }, "name": "A String", # Immutable. The resource name of the DeploymentResourcePool. Format: `projects/{project}/locations/{location}/deploymentResourcePools/{deployment_resource_pool}` + "serviceAccount": "A String", # The service account that the DeploymentResourcePool's container(s) run as. Specify the email address of the service account. If this service account is not specified, the container(s) run as a service account that doesn't have access to the resource project. Users deploying the Models to this DeploymentResourcePool must have the `iam.serviceAccounts.actAs` permission on this service account. }
@@ -278,7 +288,12 @@ Method Details
"maxReplicaCount": 42, # Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, will use min_replica_count as the default value. The value of this field impacts the charge against Vertex CPU and GPU quotas. Specifically, you will be charged for (max_replica_count * number of cores in the selected machine type) and (max_replica_count * number of GPUs per replica in the selected machine type). "minReplicaCount": 42, # Required. Immutable. The minimum number of machine replicas this DeployedModel will be always deployed on. This value must be greater than or equal to 1. If traffic against the DeployedModel increases, it may dynamically be deployed onto more replicas, and as traffic decreases, some of these extra replicas may be freed. }, + "disableContainerLogging": True or False, # If the DeploymentResourcePool is deployed with custom-trained Models or AutoML Tabular Models, the container(s) of the DeploymentResourcePool will send `stderr` and `stdout` streams to Cloud Logging by default. Please note that the logs incur cost, which are subject to [Cloud Logging pricing](https://cloud.google.com/logging/pricing). User can disable container logging by setting this flag to true. + "encryptionSpec": { # Represents a customer-managed encryption key spec that can be applied to a top-level resource. # Customer-managed encryption key spec for a DeploymentResourcePool. If set, this DeploymentResourcePool will be secured by this key. Endpoints and the DeploymentResourcePool they deploy in need to have the same EncryptionSpec. + "kmsKeyName": "A String", # Required. The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: `projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key`. The key needs to be in the same region as where the compute resource is created. + }, "name": "A String", # Immutable. The resource name of the DeploymentResourcePool. Format: `projects/{project}/locations/{location}/deploymentResourcePools/{deployment_resource_pool}` + "serviceAccount": "A String", # The service account that the DeploymentResourcePool's container(s) run as. Specify the email address of the service account. If this service account is not specified, the container(s) run as a service account that doesn't have access to the resource project. Users deploying the Models to this DeploymentResourcePool must have the `iam.serviceAccounts.actAs` permission on this service account. }, ], "nextPageToken": "A String", # A token, which can be sent as `page_token` to retrieve the next page. If this field is omitted, there are no subsequent pages. diff --git a/docs/dyn/aiplatform_v1beta1.projects.locations.endpoints.html b/docs/dyn/aiplatform_v1beta1.projects.locations.endpoints.html index 911ca68a312..50820bc6cdb 100644 --- a/docs/dyn/aiplatform_v1beta1.projects.locations.endpoints.html +++ b/docs/dyn/aiplatform_v1beta1.projects.locations.endpoints.html @@ -74,6 +74,11 @@
@@ -74,6 +74,11 @@ Vertex AI API . projects . locations . endpoints
 Instance Methods
+
+  chat()
+
+Returns the chat Resource.
+
 operations()
@@ -1209,6 +1214,7 @@ Method Details
 The object takes the form of:
 
 { # Request message for [PredictionService.GenerateContent].
+  "cachedContent": "A String", # Optional. The name of the cached content used as context to serve the prediction. Note: only used in explicit caching, where users can have control over caching (e.g. what content to cache) and enjoy guaranteed cost savings. Format: `projects/{project}/locations/{location}/cachedContents/{cachedContent}`
   "contents": [ # Required. The content of the current conversation with the model. For single-turn queries, this is a single instance. For multi-turn queries, this is a repeated field that contains conversation history + latest request.
     { # The base structured datatype containing multi-part content of a message. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn.
       "parts": [ # Required. Ordered `Parts` that constitute a single message. Parts may have different IANA MIME types.
@@ -1249,7 +1255,34 @@ Method Details
"maxOutputTokens": 42, # Optional. The maximum number of output tokens to generate per message. "presencePenalty": 3.14, # Optional. Positive penalties. "responseMimeType": "A String", # Optional. Output response mimetype of the generated candidate text. Supported mimetype: - `text/plain`: (default) Text output. - `application/json`: JSON response in the candidates. The model needs to be prompted to output the appropriate response type, otherwise the behavior is undefined. This is a preview feature. - "responseStyle": "A String", # Optional. Control Three levels of creativity in the model output. Default: RESPONSE_STYLE_BALANCED + "responseSchema": { # Schema is used to define the format of input/output data. Represents a select subset of an [OpenAPI 3.0 schema object](https://spec.openapis.org/oas/v3.0.3#schema). More fields may be added in the future as needed. # Optional. The `Schema` object allows the definition of input and output data types. These types can be objects, but also primitives and arrays. Represents a select subset of an [OpenAPI 3.0 schema object](https://spec.openapis.org/oas/v3.0.3#schema). If set, a compatible response_mime_type must also be set. Compatible mimetypes: `application/json`: Schema for JSON response. + "default": "", # Optional. Default value of the data. + "description": "A String", # Optional. The description of the data. + "enum": [ # Optional. Possible values of the element of Type.STRING with enum format. For example we can define an Enum Direction as : {type:STRING, format:enum, enum:["EAST", NORTH", "SOUTH", "WEST"]} + "A String", + ], + "example": "", # Optional. Example of the object. Will only populated when the object is the root. + "format": "A String", # Optional. The format of the data. Supported formats: for NUMBER type: "float", "double" for INTEGER type: "int32", "int64" for STRING type: "email", "byte", etc + "items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. SCHEMA FIELDS FOR TYPE ARRAY Schema of the elements of Type.ARRAY. + "maxItems": "A String", # Optional. Maximum number of the elements for Type.ARRAY. + "maxLength": "A String", # Optional. Maximum length of the Type.STRING + "maxProperties": "A String", # Optional. Maximum number of the properties for Type.OBJECT. + "maximum": 3.14, # Optional. Maximum value of the Type.INTEGER and Type.NUMBER + "minItems": "A String", # Optional. Minimum number of the elements for Type.ARRAY. + "minLength": "A String", # Optional. SCHEMA FIELDS FOR TYPE STRING Minimum length of the Type.STRING + "minProperties": "A String", # Optional. Minimum number of the properties for Type.OBJECT. + "minimum": 3.14, # Optional. SCHEMA FIELDS FOR TYPE INTEGER and NUMBER Minimum value of the Type.INTEGER and Type.NUMBER + "nullable": True or False, # Optional. Indicates if the value may be null. + "pattern": "A String", # Optional. Pattern of the Type.STRING to restrict a string to a regular expression. + "properties": { # Optional. SCHEMA FIELDS FOR TYPE OBJECT Properties of Type.OBJECT. + "a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema + }, + "required": [ # Optional. Required properties of Type.OBJECT. + "A String", + ], + "title": "A String", # Optional. The title of the Schema. + "type": "A String", # Optional. The type of the data. + }, "stopSequences": [ # Optional. Stop sequences. "A String", ], @@ -1368,6 +1401,8 @@
@@ -1368,6 +1401,8 @@ Method Details
         },
       },
     ],
+  "googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. GoogleSearchRetrieval tool type. Specialized retrieval tool that is powered by Google search.
+  },
   "retrieval": { # Defines a retrieval tool that the model can call to access external knowledge. # Optional. Retrieval tool type. The system will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
     "disableAttribution": True or False, # Optional. Disable using the result from this tool in detecting grounding attribution. This does not affect how the result is given to the model for generation.
     "vertexAiSearch": { # Retrieve from Vertex AI Search datastore for grounding. See https://cloud.google.com/vertex-ai-search-and-conversation # Set to use data source powered by Vertex AI Search.
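The new `googleSearchRetrieval` message carries no fields; merely attaching it as a tool enables search-backed grounding. A sketch, reusing the placeholder `model` from above:

grounded_body = {
    "contents": [
        {"role": "user", "parts": [{"text": "What changed in the latest Vertex AI release?"}]},
    ],
    "tools": [
        {"googleSearchRetrieval": {}},  # empty object: its presence alone enables the tool
    ],
}

resp = service.projects().locations().publishers().models().generateContent(
    model=model, body=grounded_body
).execute()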
@@ -2881,6 +2916,7 @@ Method Details
 The object takes the form of:
 
 { # Request message for [PredictionService.GenerateContent].
+  "cachedContent": "A String", # Optional. The name of the cached content used as context to serve the prediction. Note: only used in explicit caching, where users can have control over caching (e.g. what content to cache) and enjoy guaranteed cost savings. Format: `projects/{project}/locations/{location}/cachedContents/{cachedContent}`
   "contents": [ # Required. The content of the current conversation with the model. For single-turn queries, this is a single instance. For multi-turn queries, this is a repeated field that contains conversation history + latest request.
     { # The base structured datatype containing multi-part content of a message. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn.
       "parts": [ # Required. Ordered `Parts` that constitute a single message. Parts may have different IANA MIME types.
@@ -2921,7 +2957,34 @@ Method Details
"maxOutputTokens": 42, # Optional. The maximum number of output tokens to generate per message. "presencePenalty": 3.14, # Optional. Positive penalties. "responseMimeType": "A String", # Optional. Output response mimetype of the generated candidate text. Supported mimetype: - `text/plain`: (default) Text output. - `application/json`: JSON response in the candidates. The model needs to be prompted to output the appropriate response type, otherwise the behavior is undefined. This is a preview feature. - "responseStyle": "A String", # Optional. Control Three levels of creativity in the model output. Default: RESPONSE_STYLE_BALANCED + "responseSchema": { # Schema is used to define the format of input/output data. Represents a select subset of an [OpenAPI 3.0 schema object](https://spec.openapis.org/oas/v3.0.3#schema). More fields may be added in the future as needed. # Optional. The `Schema` object allows the definition of input and output data types. These types can be objects, but also primitives and arrays. Represents a select subset of an [OpenAPI 3.0 schema object](https://spec.openapis.org/oas/v3.0.3#schema). If set, a compatible response_mime_type must also be set. Compatible mimetypes: `application/json`: Schema for JSON response. + "default": "", # Optional. Default value of the data. + "description": "A String", # Optional. The description of the data. + "enum": [ # Optional. Possible values of the element of Type.STRING with enum format. For example we can define an Enum Direction as : {type:STRING, format:enum, enum:["EAST", NORTH", "SOUTH", "WEST"]} + "A String", + ], + "example": "", # Optional. Example of the object. Will only populated when the object is the root. + "format": "A String", # Optional. The format of the data. Supported formats: for NUMBER type: "float", "double" for INTEGER type: "int32", "int64" for STRING type: "email", "byte", etc + "items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. SCHEMA FIELDS FOR TYPE ARRAY Schema of the elements of Type.ARRAY. + "maxItems": "A String", # Optional. Maximum number of the elements for Type.ARRAY. + "maxLength": "A String", # Optional. Maximum length of the Type.STRING + "maxProperties": "A String", # Optional. Maximum number of the properties for Type.OBJECT. + "maximum": 3.14, # Optional. Maximum value of the Type.INTEGER and Type.NUMBER + "minItems": "A String", # Optional. Minimum number of the elements for Type.ARRAY. + "minLength": "A String", # Optional. SCHEMA FIELDS FOR TYPE STRING Minimum length of the Type.STRING + "minProperties": "A String", # Optional. Minimum number of the properties for Type.OBJECT. + "minimum": 3.14, # Optional. SCHEMA FIELDS FOR TYPE INTEGER and NUMBER Minimum value of the Type.INTEGER and Type.NUMBER + "nullable": True or False, # Optional. Indicates if the value may be null. + "pattern": "A String", # Optional. Pattern of the Type.STRING to restrict a string to a regular expression. + "properties": { # Optional. SCHEMA FIELDS FOR TYPE OBJECT Properties of Type.OBJECT. + "a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema + }, + "required": [ # Optional. Required properties of Type.OBJECT. + "A String", + ], + "title": "A String", # Optional. The title of the Schema. + "type": "A String", # Optional. The type of the data. + }, "stopSequences": [ # Optional. Stop sequences. "A String", ], @@ -3040,6 +3103,8 @@
@@ -3040,6 +3103,8 @@ Method Details
         },
       },
     ],
+  "googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. GoogleSearchRetrieval tool type. Specialized retrieval tool that is powered by Google search.
+  },
   "retrieval": { # Defines a retrieval tool that the model can call to access external knowledge. # Optional. Retrieval tool type. The system will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
     "disableAttribution": True or False, # Optional. Disable using the result from this tool in detecting grounding attribution. This does not affect how the result is given to the model for generation.
     "vertexAiSearch": { # Retrieve from Vertex AI Search datastore for grounding. See https://cloud.google.com/vertex-ai-search-and-conversation # Set to use data source powered by Vertex AI Search.
diff --git a/docs/dyn/aiplatform_v1beta1.projects.locations.extensions.html b/docs/dyn/aiplatform_v1beta1.projects.locations.extensions.html
index 8515632ee47..505db1d66ca 100644
--- a/docs/dyn/aiplatform_v1beta1.projects.locations.extensions.html
+++ b/docs/dyn/aiplatform_v1beta1.projects.locations.extensions.html
@@ -309,7 +309,7 @@ Method Details
"serviceAccount": "A String", # The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc). - The audience for the token will be set to the URL in the server url defined in the OpenApi spec. - If the service account is provided, the service account should grant `iam.serviceAccounts.getOpenIdToken` permission to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents). }, }, - "description": "A String", # Required. The natural language description shown to the LLM. It should describe the usage of the extension, and is essential for the LLM to perform reasoning. + "description": "A String", # Required. The natural language description shown to the LLM. It should describe the usage of the extension, and is essential for the LLM to perform reasoning. e.g., if the extension is a data store, you can let the LLM know what data it contains. "name": "A String", # Required. Extension name shown to the LLM. The name can be up to 128 characters long. }, "name": "A String", # Identifier. The resource name of the Extension. @@ -325,8 +325,8 @@
@@ -325,8 +325,8 @@ Method Details
"a_key": "", # Properties of the object. }, "vertexAiSearchRuntimeConfig": { # Runtime configuration for Vertext AI Search extension. - "appId": "A String", # Vertex AI Search App ID. This is used to construct the search request. By setting this app_id, API will construct the serving config which is required to call search API for the user. The app_id and serving_config_name cannot both be empty at the same time. - "servingConfigName": "A String", # [Deprecated] Please use app_id instead. Vertex AI Search serving config name. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}/servingConfigs/{serving_config}` + "engineId": "A String", # Optional. Vertex AI Search engine ID. This is used to construct the search request. By setting this engine_id, API will construct the serving config using the default value to call search API for the user. The engine_id and serving_config_name cannot both be empty at the same time. + "servingConfigName": "A String", # Optional. Vertex AI Search serving config name. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}/servingConfigs/{serving_config}` }, }, "toolUseExamples": [ # Optional. Examples to illustrate the usage of the extension as a tool. @@ -457,7 +457,7 @@
@@ -457,7 +457,7 @@ Method Details
"serviceAccount": "A String", # The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc). - The audience for the token will be set to the URL in the server url defined in the OpenApi spec. - If the service account is provided, the service account should grant `iam.serviceAccounts.getOpenIdToken` permission to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents). }, }, - "description": "A String", # Required. The natural language description shown to the LLM. It should describe the usage of the extension, and is essential for the LLM to perform reasoning. + "description": "A String", # Required. The natural language description shown to the LLM. It should describe the usage of the extension, and is essential for the LLM to perform reasoning. e.g., if the extension is a data store, you can let the LLM know what data it contains. "name": "A String", # Required. Extension name shown to the LLM. The name can be up to 128 characters long. }, "name": "A String", # Identifier. The resource name of the Extension. @@ -473,8 +473,8 @@
@@ -473,8 +473,8 @@ Method Details
"a_key": "", # Properties of the object. }, "vertexAiSearchRuntimeConfig": { # Runtime configuration for Vertext AI Search extension. - "appId": "A String", # Vertex AI Search App ID. This is used to construct the search request. By setting this app_id, API will construct the serving config which is required to call search API for the user. The app_id and serving_config_name cannot both be empty at the same time. - "servingConfigName": "A String", # [Deprecated] Please use app_id instead. Vertex AI Search serving config name. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}/servingConfigs/{serving_config}` + "engineId": "A String", # Optional. Vertex AI Search engine ID. This is used to construct the search request. By setting this engine_id, API will construct the serving config using the default value to call search API for the user. The engine_id and serving_config_name cannot both be empty at the same time. + "servingConfigName": "A String", # Optional. Vertex AI Search serving config name. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}/servingConfigs/{serving_config}` }, }, "toolUseExamples": [ # Optional. Examples to illustrate the usage of the extension as a tool. @@ -644,7 +644,7 @@
@@ -644,7 +644,7 @@ Method Details
"serviceAccount": "A String", # The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc). - The audience for the token will be set to the URL in the server url defined in the OpenApi spec. - If the service account is provided, the service account should grant `iam.serviceAccounts.getOpenIdToken` permission to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents). }, }, - "description": "A String", # Required. The natural language description shown to the LLM. It should describe the usage of the extension, and is essential for the LLM to perform reasoning. + "description": "A String", # Required. The natural language description shown to the LLM. It should describe the usage of the extension, and is essential for the LLM to perform reasoning. e.g., if the extension is a data store, you can let the LLM know what data it contains. "name": "A String", # Required. Extension name shown to the LLM. The name can be up to 128 characters long. }, "name": "A String", # Identifier. The resource name of the Extension. @@ -660,8 +660,8 @@
@@ -660,8 +660,8 @@ Method Details
"a_key": "", # Properties of the object. }, "vertexAiSearchRuntimeConfig": { # Runtime configuration for Vertext AI Search extension. - "appId": "A String", # Vertex AI Search App ID. This is used to construct the search request. By setting this app_id, API will construct the serving config which is required to call search API for the user. The app_id and serving_config_name cannot both be empty at the same time. - "servingConfigName": "A String", # [Deprecated] Please use app_id instead. Vertex AI Search serving config name. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}/servingConfigs/{serving_config}` + "engineId": "A String", # Optional. Vertex AI Search engine ID. This is used to construct the search request. By setting this engine_id, API will construct the serving config using the default value to call search API for the user. The engine_id and serving_config_name cannot both be empty at the same time. + "servingConfigName": "A String", # Optional. Vertex AI Search serving config name. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}/servingConfigs/{serving_config}` }, }, "toolUseExamples": [ # Optional. Examples to illustrate the usage of the extension as a tool. @@ -809,7 +809,7 @@
@@ -809,7 +809,7 @@ Method Details
"serviceAccount": "A String", # The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc). - The audience for the token will be set to the URL in the server url defined in the OpenApi spec. - If the service account is provided, the service account should grant `iam.serviceAccounts.getOpenIdToken` permission to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents). }, }, - "description": "A String", # Required. The natural language description shown to the LLM. It should describe the usage of the extension, and is essential for the LLM to perform reasoning. + "description": "A String", # Required. The natural language description shown to the LLM. It should describe the usage of the extension, and is essential for the LLM to perform reasoning. e.g., if the extension is a data store, you can let the LLM know what data it contains. "name": "A String", # Required. Extension name shown to the LLM. The name can be up to 128 characters long. }, "name": "A String", # Identifier. The resource name of the Extension. @@ -825,8 +825,8 @@
@@ -825,8 +825,8 @@ Method Details
"a_key": "", # Properties of the object. }, "vertexAiSearchRuntimeConfig": { # Runtime configuration for Vertext AI Search extension. - "appId": "A String", # Vertex AI Search App ID. This is used to construct the search request. By setting this app_id, API will construct the serving config which is required to call search API for the user. The app_id and serving_config_name cannot both be empty at the same time. - "servingConfigName": "A String", # [Deprecated] Please use app_id instead. Vertex AI Search serving config name. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}/servingConfigs/{serving_config}` + "engineId": "A String", # Optional. Vertex AI Search engine ID. This is used to construct the search request. By setting this engine_id, API will construct the serving config using the default value to call search API for the user. The engine_id and serving_config_name cannot both be empty at the same time. + "servingConfigName": "A String", # Optional. Vertex AI Search serving config name. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}/servingConfigs/{serving_config}` }, }, "toolUseExamples": [ # Optional. Examples to illustrate the usage of the extension as a tool. @@ -850,7 +850,7 @@
@@ -850,7 +850,7 @@ Method Details
"updateTime": "A String", # Output only. Timestamp when this Extension was most recently updated. } - updateMask: string, Required. Mask specifying which fields to update. Supported fields: * `display_name` * `description` * `tool_use_examples` + updateMask: string, Required. Mask specifying which fields to update. Supported fields: * `display_name` * `description` * `runtime_config` * `tool_use_examples` * `manifest.description` x__xgafv: string, V1 error format. Allowed values 1 - v1 error format @@ -956,7 +956,7 @@
@@ -956,7 +956,7 @@ Method Details
"serviceAccount": "A String", # The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc). - The audience for the token will be set to the URL in the server url defined in the OpenApi spec. - If the service account is provided, the service account should grant `iam.serviceAccounts.getOpenIdToken` permission to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents). }, }, - "description": "A String", # Required. The natural language description shown to the LLM. It should describe the usage of the extension, and is essential for the LLM to perform reasoning. + "description": "A String", # Required. The natural language description shown to the LLM. It should describe the usage of the extension, and is essential for the LLM to perform reasoning. e.g., if the extension is a data store, you can let the LLM know what data it contains. "name": "A String", # Required. Extension name shown to the LLM. The name can be up to 128 characters long. }, "name": "A String", # Identifier. The resource name of the Extension. @@ -972,8 +972,8 @@
@@ -972,8 +972,8 @@ Method Details
"a_key": "", # Properties of the object. }, "vertexAiSearchRuntimeConfig": { # Runtime configuration for Vertext AI Search extension. - "appId": "A String", # Vertex AI Search App ID. This is used to construct the search request. By setting this app_id, API will construct the serving config which is required to call search API for the user. The app_id and serving_config_name cannot both be empty at the same time. - "servingConfigName": "A String", # [Deprecated] Please use app_id instead. Vertex AI Search serving config name. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}/servingConfigs/{serving_config}` + "engineId": "A String", # Optional. Vertex AI Search engine ID. This is used to construct the search request. By setting this engine_id, API will construct the serving config using the default value to call search API for the user. The engine_id and serving_config_name cannot both be empty at the same time. + "servingConfigName": "A String", # Optional. Vertex AI Search serving config name. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}/servingConfigs/{serving_config}` }, }, "toolUseExamples": [ # Optional. Examples to illustrate the usage of the extension as a tool. diff --git a/docs/dyn/aiplatform_v1beta1.projects.locations.featureOnlineStores.featureViews.html b/docs/dyn/aiplatform_v1beta1.projects.locations.featureOnlineStores.featureViews.html index d6d62813e82..3c2b95564d1 100644 --- a/docs/dyn/aiplatform_v1beta1.projects.locations.featureOnlineStores.featureViews.html +++ b/docs/dyn/aiplatform_v1beta1.projects.locations.featureOnlineStores.featureViews.html @@ -340,6 +340,14 @@
@@ -340,6 +340,14 @@ Method Details
           ],
         },
         "stringValue": "A String", # String feature value.
+        "structValue": { # Struct (or object) type feature value. # A struct type feature value.
+          "values": [ # A list of field values.
+            { # One field of a Struct (or object) type feature value.
+              "name": "A String", # Name of the field in the struct feature.
+              "value": # Object with schema name: GoogleCloudAiplatformV1beta1FeatureValue # The value for this field.
+            },
+          ],
+        },
       },
     },
   ],
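On the read path, the new `structValue` arrives wrapped in the usual FeatureValue envelope, so unpacking it is ordinary dictionary work. A sketch against an assumed fetch response; the field names and values are made up:

def struct_to_dict(struct_value):
    """Flatten a structValue message into {field name: inner FeatureValue}."""
    return {field["name"]: field["value"] for field in struct_value.get("values", [])}

# Shape taken from the schema above; a real dict like this would come out of
# a featureViews fetchFeatureValues() response.
feature_value = {
    "structValue": {
        "values": [
            {"name": "city", "value": {"stringValue": "Berlin"}},
            {"name": "population", "value": {"int64Value": "3600000"}},
        ],
    },
}
print(struct_to_dict(feature_value["structValue"]))
# {'city': {'stringValue': 'Berlin'}, 'population': {'int64Value': '3600000'}}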
@@ -751,6 +759,14 @@ Method Details
           ],
         },
         "stringValue": "A String", # String feature value.
+        "structValue": { # Struct (or object) type feature value. # A struct type feature value.
+          "values": [ # A list of field values.
+            { # One field of a Struct (or object) type feature value.
+              "name": "A String", # Name of the field in the struct feature.
+              "value": # Object with schema name: GoogleCloudAiplatformV1beta1FeatureValue # The value for this field.
+            },
+          ],
+        },
       },
     },
   ],
@@ -898,6 +914,14 @@ Method Details
           ],
         },
         "stringValue": "A String", # String feature value.
+        "structValue": { # Struct (or object) type feature value. # A struct type feature value.
+          "values": [ # A list of field values.
+            { # One field of a Struct (or object) type feature value.
+              "name": "A String", # Name of the field in the struct feature.
+              "value": # Object with schema name: GoogleCloudAiplatformV1beta1FeatureValue # The value for this field.
+            },
+          ],
+        },
       },
     },
   ],
diff --git a/docs/dyn/aiplatform_v1beta1.projects.locations.featureOnlineStores.html b/docs/dyn/aiplatform_v1beta1.projects.locations.featureOnlineStores.html
index 9e4508cf5e2..ab27f339004 100644
--- a/docs/dyn/aiplatform_v1beta1.projects.locations.featureOnlineStores.html
+++ b/docs/dyn/aiplatform_v1beta1.projects.locations.featureOnlineStores.html
@@ -151,6 +151,9 @@ Method Details
"embeddingManagement": { # Deprecated: This sub message is no longer needed anymore and embedding management is automatically enabled when specifying Optimized storage type. Contains settings for embedding management. # Optional. Deprecated: This field is no longer needed anymore and embedding management is automatically enabled when specifying Optimized storage type. "enabled": True or False, # Optional. Immutable. Whether to enable embedding management in this FeatureOnlineStore. It's immutable after creation to ensure the FeatureOnlineStore availability. }, + "encryptionSpec": { # Represents a customer-managed encryption key spec that can be applied to a top-level resource. # Optional. Customer-managed encryption key spec for data storage. If set, online store will be secured by this key. + "kmsKeyName": "A String", # Required. The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: `projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key`. The key needs to be in the same region as where the compute resource is created. + }, "etag": "A String", # Optional. Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens. "labels": { # Optional. The labels with user-defined metadata to organize your FeatureOnlineStore. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information on and examples of labels. No more than 64 user labels can be associated with one FeatureOnlineStore(System labels are excluded)." System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable. "a_key": "A String", @@ -264,6 +267,9 @@
@@ -264,6 +267,9 @@ Method Details
"embeddingManagement": { # Deprecated: This sub message is no longer needed anymore and embedding management is automatically enabled when specifying Optimized storage type. Contains settings for embedding management. # Optional. Deprecated: This field is no longer needed anymore and embedding management is automatically enabled when specifying Optimized storage type. "enabled": True or False, # Optional. Immutable. Whether to enable embedding management in this FeatureOnlineStore. It's immutable after creation to ensure the FeatureOnlineStore availability. }, + "encryptionSpec": { # Represents a customer-managed encryption key spec that can be applied to a top-level resource. # Optional. Customer-managed encryption key spec for data storage. If set, online store will be secured by this key. + "kmsKeyName": "A String", # Required. The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: `projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key`. The key needs to be in the same region as where the compute resource is created. + }, "etag": "A String", # Optional. Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens. "labels": { # Optional. The labels with user-defined metadata to organize your FeatureOnlineStore. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information on and examples of labels. No more than 64 user labels can be associated with one FeatureOnlineStore(System labels are excluded)." System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable. "a_key": "A String", @@ -353,6 +359,9 @@
@@ -353,6 +359,9 @@ Method Details
"embeddingManagement": { # Deprecated: This sub message is no longer needed anymore and embedding management is automatically enabled when specifying Optimized storage type. Contains settings for embedding management. # Optional. Deprecated: This field is no longer needed anymore and embedding management is automatically enabled when specifying Optimized storage type. "enabled": True or False, # Optional. Immutable. Whether to enable embedding management in this FeatureOnlineStore. It's immutable after creation to ensure the FeatureOnlineStore availability. }, + "encryptionSpec": { # Represents a customer-managed encryption key spec that can be applied to a top-level resource. # Optional. Customer-managed encryption key spec for data storage. If set, online store will be secured by this key. + "kmsKeyName": "A String", # Required. The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: `projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key`. The key needs to be in the same region as where the compute resource is created. + }, "etag": "A String", # Optional. Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens. "labels": { # Optional. The labels with user-defined metadata to organize your FeatureOnlineStore. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information on and examples of labels. No more than 64 user labels can be associated with one FeatureOnlineStore(System labels are excluded)." System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable. "a_key": "A String", @@ -413,6 +422,9 @@
@@ -413,6 +422,9 @@ Method Details
"embeddingManagement": { # Deprecated: This sub message is no longer needed anymore and embedding management is automatically enabled when specifying Optimized storage type. Contains settings for embedding management. # Optional. Deprecated: This field is no longer needed anymore and embedding management is automatically enabled when specifying Optimized storage type. "enabled": True or False, # Optional. Immutable. Whether to enable embedding management in this FeatureOnlineStore. It's immutable after creation to ensure the FeatureOnlineStore availability. }, + "encryptionSpec": { # Represents a customer-managed encryption key spec that can be applied to a top-level resource. # Optional. Customer-managed encryption key spec for data storage. If set, online store will be secured by this key. + "kmsKeyName": "A String", # Required. The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: `projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key`. The key needs to be in the same region as where the compute resource is created. + }, "etag": "A String", # Optional. Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens. "labels": { # Optional. The labels with user-defined metadata to organize your FeatureOnlineStore. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information on and examples of labels. No more than 64 user labels can be associated with one FeatureOnlineStore(System labels are excluded)." System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable. "a_key": "A String", diff --git a/docs/dyn/aiplatform_v1beta1.projects.locations.featurestores.entityTypes.html b/docs/dyn/aiplatform_v1beta1.projects.locations.featurestores.entityTypes.html index edd4c93773e..15d3c5dfd39 100644 --- a/docs/dyn/aiplatform_v1beta1.projects.locations.featurestores.entityTypes.html +++ b/docs/dyn/aiplatform_v1beta1.projects.locations.featurestores.entityTypes.html @@ -746,6 +746,14 @@

Method Details

], }, "stringValue": "A String", # String feature value. + "structValue": { # Struct (or object) type feature value. # A struct type feature value. + "values": [ # A list of field values. + { # One field of a Struct (or object) type feature value. + "name": "A String", # Name of the field in the struct feature. + "value": # Object with schema name: GoogleCloudAiplatformV1beta1FeatureValue # The value for this field. + }, + ], + }, }, "values": { # Container for list of values. # Feature values list if values, successive in time, are requested. If the requested number of values is greater than the number of existing Feature values, nonexistent values are omitted instead of being returned as empty. "values": [ # A list of feature values. All of them should be the same data type. @@ -778,6 +786,14 @@

Method Details

], }, "stringValue": "A String", # String feature value. + "structValue": { # Struct (or object) type feature value. # A struct type feature value. + "values": [ # A list of field values. + { # One field of a Struct (or object) type feature value. + "name": "A String", # Name of the field in the struct feature. + "value": # Object with schema name: GoogleCloudAiplatformV1beta1FeatureValue # The value for this field. + }, + ], + }, }, ], }, @@ -917,6 +933,14 @@

Method Details

], }, "stringValue": "A String", # String feature value. + "structValue": { # Struct (or object) type feature value. # A struct type feature value. + "values": [ # A list of field values. + { # One field of a Struct (or object) type feature value. + "name": "A String", # Name of the field in the struct feature. + "value": # Object with schema name: GoogleCloudAiplatformV1beta1FeatureValue # The value for this field. + }, + ], + }, }, "values": { # Container for list of values. # Feature values list if values, successive in time, are requested. If the requested number of values is greater than the number of existing Feature values, nonexistent values are omitted instead of being returned as empty. "values": [ # A list of feature values. All of them should be the same data type. @@ -949,6 +973,14 @@

Method Details

], }, "stringValue": "A String", # String feature value. + "structValue": { # Struct (or object) type feature value. # A struct type feature value. + "values": [ # A list of field values. + { # One field of a Struct (or object) type feature value. + "name": "A String", # Name of the field in the struct feature. + "value": # Object with schema name: GoogleCloudAiplatformV1beta1FeatureValue # The value for this field. + }, + ], + }, }, ], }, @@ -1032,6 +1064,14 @@

Method Details

], }, "stringValue": "A String", # String feature value. + "structValue": { # Struct (or object) type feature value. # A struct type feature value. + "values": [ # A list of field values. + { # One field of a Struct (or object) type feature value. + "name": "A String", # Name of the field in the struct feature. + "value": # Object with schema name: GoogleCloudAiplatformV1beta1FeatureValue # The value for this field. + }, + ], + }, }, }, }, diff --git a/docs/dyn/aiplatform_v1beta1.projects.locations.html b/docs/dyn/aiplatform_v1beta1.projects.locations.html index 7f10d808271..0212fd302e6 100644 --- a/docs/dyn/aiplatform_v1beta1.projects.locations.html +++ b/docs/dyn/aiplatform_v1beta1.projects.locations.html @@ -89,6 +89,11 @@

Instance Methods

Returns the batchPredictionJobs Resource.

+ cachedContents()
+
+ Returns the cachedContents Resource.
+
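The new cachedContents collection is reached like the other resources listed here; its methods live in the new cachedContents page added by this patch. A hedged sketch, reusing the service object from the first sketch (the list() parameters are assumed, not shown on this page):

# Sketch: enumerate cached contents in a location (placeholders throughout).
cached = service.projects().locations().cachedContents()
resp = cached.list(parent="projects/my-project/locations/us-central1").execute()
for item in resp.get("cachedContents", []):
    print(item["name"])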

customJobs()

diff --git a/docs/dyn/aiplatform_v1beta1.projects.locations.metadataStores.html b/docs/dyn/aiplatform_v1beta1.projects.locations.metadataStores.html
index 0ec4bc5c407..6667a821eca 100644
--- a/docs/dyn/aiplatform_v1beta1.projects.locations.metadataStores.html
+++ b/docs/dyn/aiplatform_v1beta1.projects.locations.metadataStores.html
@@ -134,6 +137,9 @@

Method Details

 { # Instance of a metadata store. Contains a set of metadata that can be queried.
   "createTime": "A String", # Output only. Timestamp when this MetadataStore was created.
+  "dataplexConfig": { # Represents Dataplex integration settings. # Optional. Dataplex integration settings.
+    "enabledPipelinesLineage": True or False, # Optional. Whether or not Data Lineage synchronization is enabled for Vertex Pipelines.
+  },
   "description": "A String", # Description of the MetadataStore.
   "encryptionSpec": { # Represents a customer-managed encryption key spec that can be applied to a top-level resource. # Customer-managed encryption key spec for a Metadata Store. If set, this Metadata Store and all sub-resources of this Metadata Store are secured using this key.
     "kmsKeyName": "A String", # Required. The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: `projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key`. The key needs to be in the same region as where the compute resource is created.
@@ -227,6 +230,9 @@

Method Details

 { # Instance of a metadata store. Contains a set of metadata that can be queried.
   "createTime": "A String", # Output only. Timestamp when this MetadataStore was created.
+  "dataplexConfig": { # Represents Dataplex integration settings. # Optional. Dataplex integration settings.
+    "enabledPipelinesLineage": True or False, # Optional. Whether or not Data Lineage synchronization is enabled for Vertex Pipelines.
+  },
   "description": "A String", # Description of the MetadataStore.
   "encryptionSpec": { # Represents a customer-managed encryption key spec that can be applied to a top-level resource. # Customer-managed encryption key spec for a Metadata Store. If set, this Metadata Store and all sub-resources of this Metadata Store are secured using this key.
     "kmsKeyName": "A String", # Required. The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: `projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key`. The key needs to be in the same region as where the compute resource is created.
@@ -259,6 +265,9 @@
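A hedged sketch of opting a new MetadataStore into the Dataplex lineage sync added above, reusing the service object from the first sketch; IDs are placeholders:

# Sketch: create a MetadataStore with Data Lineage sync for Vertex Pipelines.
body = {
    "description": "store with Dataplex lineage sync",
    "dataplexConfig": {"enabledPipelinesLineage": True},
}
op = (
    service.projects()
    .locations()
    .metadataStores()
    .create(
        parent="projects/my-project/locations/us-central1",
        metadataStoreId="my-store",
        body=body,
    )
    .execute()
)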

Method Details

"metadataStores": [ # The MetadataStores found for the Location. { # Instance of a metadata store. Contains a set of metadata that can be queried. "createTime": "A String", # Output only. Timestamp when this MetadataStore was created. + "dataplexConfig": { # Represents Dataplex integration settings. # Optional. Dataplex integration settings. + "enabledPipelinesLineage": True or False, # Optional. Whether or not Data Lineage synchronization is enabled for Vertex Pipelines. + }, "description": "A String", # Description of the MetadataStore. "encryptionSpec": { # Represents a customer-managed encryption key spec that can be applied to a top-level resource. # Customer-managed encryption key spec for a Metadata Store. If set, this Metadata Store and all sub-resources of this Metadata Store are secured using this key. "kmsKeyName": "A String", # Required. The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: `projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key`. The key needs to be in the same region as where the compute resource is created. diff --git a/docs/dyn/aiplatform_v1beta1.projects.locations.modelMonitors.html b/docs/dyn/aiplatform_v1beta1.projects.locations.modelMonitors.html index 1aa138b3431..73ea3b1fbd8 100644 --- a/docs/dyn/aiplatform_v1beta1.projects.locations.modelMonitors.html +++ b/docs/dyn/aiplatform_v1beta1.projects.locations.modelMonitors.html @@ -1404,6 +1404,13 @@

Method Details

"pageSize": 42, # The standard list page size. "pageToken": "A String", # A page token received from a previous ModelMonitoringService.SearchModelMonitoringStats call. "statsFilter": { # Filter for searching ModelMonitoringStats. # Filter for search different stats. + "genAiStatsFilter": { # GenAi statistics filter. # GenAi statistics filter. + "clusterId": "A String", # From a particular cluster of monitoring results. + "modelMonitoringJob": "A String", # From a particular monitoring job. + "modelMonitoringSchedule": "A String", # From a particular monitoring schedule. + "objectiveType": "A String", # One of the supported monitoring objectives: `gen-ai-general` `gen-ai-evaluation` `gen-ai-safety` + "statsName": "A String", # If not specified, will return all the stats_names. + }, "tabularStatsFilter": { # Tabular statistics filter. # Tabular statistics filter. "algorithm": "A String", # Specify the algorithm type used for distance calculation, eg: jensen_shannon_divergence, l_infinity. "modelMonitoringJob": "A String", # From a particular monitoring job. @@ -1429,13 +1436,41 @@

Method Details

 { # Response message for ModelMonitoringService.SearchModelMonitoringStats.
   "monitoringStats": [ # Stats retrieved for requested objectives.
     { # Represents the collection of statistics for a metric.
+      "genAiStats": { # A collection of data points that describes the time-varying values of a gen ai metric. # Generated gen ai statistics.
+        "dataPoints": [ # The data points of this time series. When listing time series, points are returned in reverse time order.
+          { # Represents a single statistics data point.
+            "algorithm": "A String", # Algorithm used to calculate the metrics, eg: jensen_shannon_divergence, l_infinity.
+            "baselineStats": { # Typed value of the statistics. # Statistics from baseline dataset.
+              "distributionValue": { # Summary statistics for a population of values. # Distribution.
+                "distribution": "", # Predictive monitoring drift distribution in `tensorflow.metadata.v0.DatasetFeatureStatistics` format.
+                "distributionDeviation": 3.14, # Distribution distance deviation from the current dataset's statistics to baseline dataset's statistics. * For categorical feature, the distribution distance is calculated by L-infinity norm or Jensen–Shannon divergence. * For numerical feature, the distribution distance is calculated by Jensen–Shannon divergence.
+              },
+              "doubleValue": 3.14, # Double.
+            },
+            "createTime": "A String", # Statistics create time.
+            "currentStats": { # Typed value of the statistics. # Statistics from current dataset.
+              "distributionValue": { # Summary statistics for a population of values. # Distribution.
+                "distribution": "", # Predictive monitoring drift distribution in `tensorflow.metadata.v0.DatasetFeatureStatistics` format.
+                "distributionDeviation": 3.14, # Distribution distance deviation from the current dataset's statistics to baseline dataset's statistics. * For categorical feature, the distribution distance is calculated by L-infinity norm or Jensen–Shannon divergence. * For numerical feature, the distribution distance is calculated by Jensen–Shannon divergence.
+              },
+              "doubleValue": 3.14, # Double.
+            },
+            "hasAnomaly": True or False, # Indicates whether the statistics has an anomaly.
+            "modelMonitoringJob": "A String", # Model monitoring job resource name.
+            "schedule": "A String", # Schedule resource name.
+            "thresholdValue": 3.14, # Threshold value.
+          },
+        ],
+        "objectiveType": "A String", # One of the supported monitoring objectives: `gen-ai-general` `gen-ai-evaluation` `gen-ai-safety`
+        "statsName": "A String", # The stats name.
+      },
       "tabularStats": { # A collection of data points that describes the time-varying values of a tabular metric. # Generated tabular statistics.
         "dataPoints": [ # The data points of this time series. When listing time series, points are returned in reverse time order.
           { # Represents a single statistics data point.
             "algorithm": "A String", # Algorithm used to calculate the metrics, eg: jensen_shannon_divergence, l_infinity.
             "baselineStats": { # Typed value of the statistics. # Statistics from baseline dataset.
               "distributionValue": { # Summary statistics for a population of values. # Distribution.
-                "distribution": "", # tensorflow.metadata.v0.DatasetFeatureStatistics format.
+                "distribution": "", # Predictive monitoring drift distribution in `tensorflow.metadata.v0.DatasetFeatureStatistics` format.
                 "distributionDeviation": 3.14, # Distribution distance deviation from the current dataset's statistics to baseline dataset's statistics. * For categorical feature, the distribution distance is calculated by L-infinity norm or Jensen–Shannon divergence. * For numerical feature, the distribution distance is calculated by Jensen–Shannon divergence.
               },
               "doubleValue": 3.14, # Double.
@@ -1443,7 +1478,7 @@

Method Details

             "createTime": "A String", # Statistics create time.
             "currentStats": { # Typed value of the statistics. # Statistics from current dataset.
               "distributionValue": { # Summary statistics for a population of values. # Distribution.
-                "distribution": "", # tensorflow.metadata.v0.DatasetFeatureStatistics format.
+                "distribution": "", # Predictive monitoring drift distribution in `tensorflow.metadata.v0.DatasetFeatureStatistics` format.
                 "distributionDeviation": 3.14, # Distribution distance deviation from the current dataset's statistics to baseline dataset's statistics. * For categorical feature, the distribution distance is calculated by L-infinity norm or Jensen–Shannon divergence. * For numerical feature, the distribution distance is calculated by Jensen–Shannon divergence.
               },
               "doubleValue": 3.14, # Double.
diff --git a/docs/dyn/aiplatform_v1beta1.projects.locations.models.html b/docs/dyn/aiplatform_v1beta1.projects.locations.models.html
index beaf8d6c863..7538e795a25 100644
--- a/docs/dyn/aiplatform_v1beta1.projects.locations.models.html
+++ b/docs/dyn/aiplatform_v1beta1.projects.locations.models.html
@@ -352,7 +352,7 @@

Method Details

}, ], "healthProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes readiness probe. - "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take. + "exec": { # ExecAction specifies a command to execute. # ExecAction probes the health of a container by executing a command. "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. "A String", ], @@ -370,7 +370,7 @@

Method Details

"predictRoute": "A String", # Immutable. HTTP path on the container to send prediction requests to. Vertex AI forwards requests sent using projects.locations.endpoints.predict to this path on the container's IP address and port. Vertex AI then returns the container's response in the API response. For example, if you set this field to `/foo`, then when Vertex AI receives a prediction request, it forwards the request body in a POST request to the `/foo` path on the port of your container specified by the first value of this `ModelContainerSpec`'s ports field. If you don't specify this field, it defaults to the following value when you deploy this Model to an Endpoint: /v1/endpoints/ENDPOINT/deployedModels/DEPLOYED_MODEL:predict The placeholders in this value are replaced as follows: * ENDPOINT: The last segment (following `endpoints/`)of the Endpoint.name][] field of the Endpoint where this Model has been deployed. (Vertex AI makes this value available to your container code as the [`AIP_ENDPOINT_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) * DEPLOYED_MODEL: DeployedModel.id of the `DeployedModel`. (Vertex AI makes this value available to your container code as the [`AIP_DEPLOYED_MODEL_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) "sharedMemorySizeMb": "A String", # Immutable. The amount of the VM memory to reserve as the shared memory for the model in megabytes. "startupProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes startup probe. - "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take. + "exec": { # ExecAction specifies a command to execute. # ExecAction probes the health of a container by executing a command. "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. "A String", ], @@ -638,7 +638,7 @@

Method Details

}, ], "healthProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes readiness probe. - "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take. + "exec": { # ExecAction specifies a command to execute. # ExecAction probes the health of a container by executing a command. "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. "A String", ], @@ -656,7 +656,7 @@

Method Details

"predictRoute": "A String", # Immutable. HTTP path on the container to send prediction requests to. Vertex AI forwards requests sent using projects.locations.endpoints.predict to this path on the container's IP address and port. Vertex AI then returns the container's response in the API response. For example, if you set this field to `/foo`, then when Vertex AI receives a prediction request, it forwards the request body in a POST request to the `/foo` path on the port of your container specified by the first value of this `ModelContainerSpec`'s ports field. If you don't specify this field, it defaults to the following value when you deploy this Model to an Endpoint: /v1/endpoints/ENDPOINT/deployedModels/DEPLOYED_MODEL:predict The placeholders in this value are replaced as follows: * ENDPOINT: The last segment (following `endpoints/`)of the Endpoint.name][] field of the Endpoint where this Model has been deployed. (Vertex AI makes this value available to your container code as the [`AIP_ENDPOINT_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) * DEPLOYED_MODEL: DeployedModel.id of the `DeployedModel`. (Vertex AI makes this value available to your container code as the [`AIP_DEPLOYED_MODEL_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) "sharedMemorySizeMb": "A String", # Immutable. The amount of the VM memory to reserve as the shared memory for the model in megabytes. "startupProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes startup probe. - "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take. + "exec": { # ExecAction specifies a command to execute. # ExecAction probes the health of a container by executing a command. "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. "A String", ], @@ -893,7 +893,7 @@

Method Details

}, ], "healthProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes readiness probe. - "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take. + "exec": { # ExecAction specifies a command to execute. # ExecAction probes the health of a container by executing a command. "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. "A String", ], @@ -911,7 +911,7 @@

Method Details

"predictRoute": "A String", # Immutable. HTTP path on the container to send prediction requests to. Vertex AI forwards requests sent using projects.locations.endpoints.predict to this path on the container's IP address and port. Vertex AI then returns the container's response in the API response. For example, if you set this field to `/foo`, then when Vertex AI receives a prediction request, it forwards the request body in a POST request to the `/foo` path on the port of your container specified by the first value of this `ModelContainerSpec`'s ports field. If you don't specify this field, it defaults to the following value when you deploy this Model to an Endpoint: /v1/endpoints/ENDPOINT/deployedModels/DEPLOYED_MODEL:predict The placeholders in this value are replaced as follows: * ENDPOINT: The last segment (following `endpoints/`)of the Endpoint.name][] field of the Endpoint where this Model has been deployed. (Vertex AI makes this value available to your container code as the [`AIP_ENDPOINT_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) * DEPLOYED_MODEL: DeployedModel.id of the `DeployedModel`. (Vertex AI makes this value available to your container code as the [`AIP_DEPLOYED_MODEL_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) "sharedMemorySizeMb": "A String", # Immutable. The amount of the VM memory to reserve as the shared memory for the model in megabytes. "startupProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes startup probe. - "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take. + "exec": { # ExecAction specifies a command to execute. # ExecAction probes the health of a container by executing a command. "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. "A String", ], @@ -1178,7 +1178,7 @@

Method Details

}, ], "healthProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes readiness probe. - "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take. + "exec": { # ExecAction specifies a command to execute. # ExecAction probes the health of a container by executing a command. "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. "A String", ], @@ -1196,7 +1196,7 @@

Method Details

"predictRoute": "A String", # Immutable. HTTP path on the container to send prediction requests to. Vertex AI forwards requests sent using projects.locations.endpoints.predict to this path on the container's IP address and port. Vertex AI then returns the container's response in the API response. For example, if you set this field to `/foo`, then when Vertex AI receives a prediction request, it forwards the request body in a POST request to the `/foo` path on the port of your container specified by the first value of this `ModelContainerSpec`'s ports field. If you don't specify this field, it defaults to the following value when you deploy this Model to an Endpoint: /v1/endpoints/ENDPOINT/deployedModels/DEPLOYED_MODEL:predict The placeholders in this value are replaced as follows: * ENDPOINT: The last segment (following `endpoints/`)of the Endpoint.name][] field of the Endpoint where this Model has been deployed. (Vertex AI makes this value available to your container code as the [`AIP_ENDPOINT_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) * DEPLOYED_MODEL: DeployedModel.id of the `DeployedModel`. (Vertex AI makes this value available to your container code as the [`AIP_DEPLOYED_MODEL_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) "sharedMemorySizeMb": "A String", # Immutable. The amount of the VM memory to reserve as the shared memory for the model in megabytes. "startupProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes startup probe. - "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take. + "exec": { # ExecAction specifies a command to execute. # ExecAction probes the health of a container by executing a command. "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. "A String", ], @@ -1418,7 +1418,7 @@

Method Details

}, ], "healthProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes readiness probe. - "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take. + "exec": { # ExecAction specifies a command to execute. # ExecAction probes the health of a container by executing a command. "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. "A String", ], @@ -1436,7 +1436,7 @@

Method Details

"predictRoute": "A String", # Immutable. HTTP path on the container to send prediction requests to. Vertex AI forwards requests sent using projects.locations.endpoints.predict to this path on the container's IP address and port. Vertex AI then returns the container's response in the API response. For example, if you set this field to `/foo`, then when Vertex AI receives a prediction request, it forwards the request body in a POST request to the `/foo` path on the port of your container specified by the first value of this `ModelContainerSpec`'s ports field. If you don't specify this field, it defaults to the following value when you deploy this Model to an Endpoint: /v1/endpoints/ENDPOINT/deployedModels/DEPLOYED_MODEL:predict The placeholders in this value are replaced as follows: * ENDPOINT: The last segment (following `endpoints/`)of the Endpoint.name][] field of the Endpoint where this Model has been deployed. (Vertex AI makes this value available to your container code as the [`AIP_ENDPOINT_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) * DEPLOYED_MODEL: DeployedModel.id of the `DeployedModel`. (Vertex AI makes this value available to your container code as the [`AIP_DEPLOYED_MODEL_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) "sharedMemorySizeMb": "A String", # Immutable. The amount of the VM memory to reserve as the shared memory for the model in megabytes. "startupProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes startup probe. - "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take. + "exec": { # ExecAction specifies a command to execute. # ExecAction probes the health of a container by executing a command. "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. "A String", ], @@ -1657,7 +1657,7 @@

Method Details

}, ], "healthProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes readiness probe. - "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take. + "exec": { # ExecAction specifies a command to execute. # ExecAction probes the health of a container by executing a command. "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. "A String", ], @@ -1675,7 +1675,7 @@

Method Details

"predictRoute": "A String", # Immutable. HTTP path on the container to send prediction requests to. Vertex AI forwards requests sent using projects.locations.endpoints.predict to this path on the container's IP address and port. Vertex AI then returns the container's response in the API response. For example, if you set this field to `/foo`, then when Vertex AI receives a prediction request, it forwards the request body in a POST request to the `/foo` path on the port of your container specified by the first value of this `ModelContainerSpec`'s ports field. If you don't specify this field, it defaults to the following value when you deploy this Model to an Endpoint: /v1/endpoints/ENDPOINT/deployedModels/DEPLOYED_MODEL:predict The placeholders in this value are replaced as follows: * ENDPOINT: The last segment (following `endpoints/`)of the Endpoint.name][] field of the Endpoint where this Model has been deployed. (Vertex AI makes this value available to your container code as the [`AIP_ENDPOINT_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) * DEPLOYED_MODEL: DeployedModel.id of the `DeployedModel`. (Vertex AI makes this value available to your container code as the [`AIP_DEPLOYED_MODEL_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) "sharedMemorySizeMb": "A String", # Immutable. The amount of the VM memory to reserve as the shared memory for the model in megabytes. "startupProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes startup probe. - "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take. + "exec": { # ExecAction specifies a command to execute. # ExecAction probes the health of a container by executing a command. "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. "A String", ], @@ -2040,7 +2040,7 @@

Method Details

}, ], "healthProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes readiness probe. - "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take. + "exec": { # ExecAction specifies a command to execute. # ExecAction probes the health of a container by executing a command. "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. "A String", ], @@ -2058,7 +2058,7 @@

Method Details

"predictRoute": "A String", # Immutable. HTTP path on the container to send prediction requests to. Vertex AI forwards requests sent using projects.locations.endpoints.predict to this path on the container's IP address and port. Vertex AI then returns the container's response in the API response. For example, if you set this field to `/foo`, then when Vertex AI receives a prediction request, it forwards the request body in a POST request to the `/foo` path on the port of your container specified by the first value of this `ModelContainerSpec`'s ports field. If you don't specify this field, it defaults to the following value when you deploy this Model to an Endpoint: /v1/endpoints/ENDPOINT/deployedModels/DEPLOYED_MODEL:predict The placeholders in this value are replaced as follows: * ENDPOINT: The last segment (following `endpoints/`)of the Endpoint.name][] field of the Endpoint where this Model has been deployed. (Vertex AI makes this value available to your container code as the [`AIP_ENDPOINT_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) * DEPLOYED_MODEL: DeployedModel.id of the `DeployedModel`. (Vertex AI makes this value available to your container code as the [`AIP_DEPLOYED_MODEL_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) "sharedMemorySizeMb": "A String", # Immutable. The amount of the VM memory to reserve as the shared memory for the model in megabytes. "startupProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes startup probe. - "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take. + "exec": { # ExecAction specifies a command to execute. # ExecAction probes the health of a container by executing a command. "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. "A String", ], diff --git a/docs/dyn/aiplatform_v1beta1.projects.locations.notebookExecutionJobs.html b/docs/dyn/aiplatform_v1beta1.projects.locations.notebookExecutionJobs.html index e287ddb9501..df507d12117 100644 --- a/docs/dyn/aiplatform_v1beta1.projects.locations.notebookExecutionJobs.html +++ b/docs/dyn/aiplatform_v1beta1.projects.locations.notebookExecutionJobs.html @@ -77,6 +77,9 @@

Instance Methods

close()

Close httplib2 connections.

+
+ create(parent, body=None, notebookExecutionJobId=None, x__xgafv=None)
+
+ Creates a NotebookExecutionJob.

delete(name, x__xgafv=None)

Deletes a NotebookExecutionJob.

@@ -101,6 +104,82 @@

Method Details

Close httplib2 connections.
+
+ create(parent, body=None, notebookExecutionJobId=None, x__xgafv=None)
+
+ Creates a NotebookExecutionJob.
+
+Args:
+  parent: string, Required. The resource name of the Location to create the NotebookExecutionJob. Format: `projects/{project}/locations/{location}` (required)
+  body: object, The request body.
+    The object takes the form of:
+
+{ # NotebookExecutionJob represents an instance of a notebook execution.
+  "createTime": "A String", # Output only. Timestamp when this NotebookExecutionJob was created.
+  "dataformRepositorySource": { # The Dataform Repository containing the input notebook. # The Dataform Repository pointing to a single file notebook repository.
+    "commitSha": "A String", # The commit SHA to read repository with. If unset, the file will be read at HEAD.
+    "dataformRepositoryResourceName": "A String", # The resource name of the Dataform Repository. Format: `projects/{project_id}/locations/{location}/repositories/{repository_id}`
+  },
+  "directNotebookSource": { # The content of the input notebook in ipynb format. # The contents of an input notebook file.
+    "content": "A String", # The base64-encoded contents of the input notebook file.
+  },
+  "displayName": "A String", # The display name of the NotebookExecutionJob. The name can be up to 128 characters long and can consist of any UTF-8 characters.
+  "executionTimeout": "A String", # Max running time of the execution job in seconds (default 86400s / 24 hrs).
+  "executionUser": "A String", # The user email to run the execution as. Only supported by Colab runtimes.
+  "gcsNotebookSource": { # The Cloud Storage uri for the input notebook. # The Cloud Storage url pointing to the ipynb file. Format: `gs://bucket/notebook_file.ipynb`
+    "generation": "A String", # The version of the Cloud Storage object to read. If unset, the current version of the object is read. See https://cloud.google.com/storage/docs/metadata#generation-number.
+    "uri": "A String", # The Cloud Storage uri pointing to the ipynb file. Format: `gs://bucket/notebook_file.ipynb`
+  },
+  "gcsOutputUri": "A String", # The Cloud Storage location to upload the result to. Format: `gs://bucket-name`
+  "jobState": "A String", # Output only. The state of the NotebookExecutionJob.
+  "labels": { # The labels with user-defined metadata to organize NotebookExecutionJobs. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels. System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable.
+    "a_key": "A String",
+  },
+  "name": "A String", # Output only. The resource name of this NotebookExecutionJob. Format: `projects/{project_id}/locations/{location}/notebookExecutionJobs/{job_id}`
+  "notebookRuntimeTemplateResourceName": "A String", # The NotebookRuntimeTemplate to source compute configuration from.
+  "scheduleResourceName": "A String", # Output only. The Schedule resource name if this job is triggered by one. Format: `projects/{project_id}/locations/{location}/schedules/{schedule_id}`
+  "serviceAccount": "A String", # The service account to run the execution as.
+  "status": { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # Output only. Populated when the NotebookExecutionJob is completed. When there is an error during notebook execution, the error details are populated.
+    "code": 42, # The status code, which should be an enum value of google.rpc.Code.
+    "details": [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
+      {
+        "a_key": "", # Properties of the object. Contains field @type with type URL.
+      },
+    ],
+    "message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
+  },
+  "updateTime": "A String", # Output only. Timestamp when this NotebookExecutionJob was most recently updated.
+}
+
+  notebookExecutionJobId: string, Optional. User specified ID for the NotebookExecutionJob.
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # This resource represents a long-running operation that is the result of a network API call.
+  "done": True or False, # If the value is `false`, it means the operation is still in progress. If `true`, the operation is completed, and either `error` or `response` is available.
+  "error": { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # The error result of the operation in case of failure or cancellation.
+    "code": 42, # The status code, which should be an enum value of google.rpc.Code.
+    "details": [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
+      {
+        "a_key": "", # Properties of the object. Contains field @type with type URL.
+      },
+    ],
+    "message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
+  },
+  "metadata": { # Service-specific metadata associated with the operation. It typically contains progress information and common metadata such as create time. Some services might not provide such metadata. Any method that returns a long-running operation should document the metadata type, if any.
+    "a_key": "", # Properties of the object. Contains field @type with type URL.
+  },
+  "name": "A String", # The server-assigned name, which is only unique within the same service that originally returns it. If you use the default HTTP mapping, the `name` should be a resource name ending with `operations/{unique_id}`.
+  "response": { # The normal, successful response of the operation. If the original method returns no data on success, such as `Delete`, the response is `google.protobuf.Empty`. If the original method is standard `Get`/`Create`/`Update`, the response should be the resource. For other methods, the response should have the type `XxxResponse`, where `Xxx` is the original method name. For example, if the original method name is `TakeSnapshot()`, the inferred response type is `TakeSnapshotResponse`.
+    "a_key": "", # Properties of the object. Contains field @type with type URL.
+  },
+}
+
+
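A usage sketch for the create() method documented above, reusing the service object from the first sketch; names, bucket, and template are placeholders, and the returned Operation would normally be polled to completion:

# Sketch: execute a notebook from its base64-encoded contents.
import base64

with open("analysis.ipynb", "rb") as f:
    content = base64.b64encode(f.read()).decode("utf-8")

job = {
    "displayName": "nightly-analysis",
    "directNotebookSource": {"content": content},
    "gcsOutputUri": "gs://my-bucket/notebook-results",
    "notebookRuntimeTemplateResourceName": (
        "projects/my-project/locations/us-central1/"
        "notebookRuntimeTemplates/my-template"
    ),
    "executionTimeout": "3600s",  # default is 86400s / 24 hrs
}
op = (
    service.projects()
    .locations()
    .notebookExecutionJobs()
    .create(
        parent="projects/my-project/locations/us-central1",
        notebookExecutionJobId="nightly-run-001",
        body=job,
    )
    .execute()
)
print(op["name"])  # long-running operation to poll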
delete(name, x__xgafv=None)
Deletes a NotebookExecutionJob.
@@ -186,23 +265,6 @@ 

Method Details

 { # NotebookExecutionJob represents an instance of a notebook execution.
   "createTime": "A String", # Output only. Timestamp when this NotebookExecutionJob was created.
-  "customEnvironmentSpec": { # Compute configuration to use for an execution job. # The custom compute configuration for an execution job.
-    "machineSpec": { # Specification of a single machine. # The specification of a single machine for the execution job.
-      "acceleratorCount": 42, # The number of accelerators to attach to the machine.
-      "acceleratorType": "A String", # Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
-      "machineType": "A String", # Immutable. The type of the machine. See the [list of machine types supported for prediction](https://cloud.google.com/vertex-ai/docs/predictions/configure-compute#machine-types) See the [list of machine types supported for custom training](https://cloud.google.com/vertex-ai/docs/training/configure-compute#machine-types). For DeployedModel this field is optional, and the default value is `n1-standard-2`. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
-      "tpuTopology": "A String", # Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
-    },
-    "networkSpec": { # Network spec. # The network configuration to use for the execution job.
-      "enableInternetAccess": True or False, # Whether to enable public internet access. Default false.
-      "network": "A String", # The full name of the Google Compute Engine [network](https://cloud.google.com//compute/docs/networks-and-firewalls#networks)
-      "subnetwork": "A String", # The name of the subnet that this instance is in. Format: `projects/{project_id_or_number}/regions/{region}/subnetworks/{subnetwork_id}`
-    },
-    "persistentDiskSpec": { # Represents the spec of persistent disk options. # The specification of a persistent disk to attach for the execution job.
-      "diskSizeGb": "A String", # Size in GB of the disk (default is 100GB).
-      "diskType": "A String", # Type of the disk (default is "pd-standard"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) "pd-standard" (Persistent Disk Hard Disk Drive) "pd-balanced" (Balanced Persistent Disk) "pd-extreme" (Extreme Persistent Disk)
-    },
-  },
   "dataformRepositorySource": { # The Dataform Repository containing the input notebook. # The Dataform Repository pointing to a single file notebook repository.
     "commitSha": "A String", # The commit SHA to read repository with. If unset, the file will be read at HEAD.
     "dataformRepositoryResourceName": "A String", # The resource name of the Dataform Repository. Format: `projects/{project_id}/locations/{location}/repositories/{repository_id}`
@@ -267,23 +329,6 @@

Method Details

 "notebookExecutionJobs": [ # List of NotebookExecutionJobs in the requested page.
   { # NotebookExecutionJob represents an instance of a notebook execution.
     "createTime": "A String", # Output only. Timestamp when this NotebookExecutionJob was created.
-    "customEnvironmentSpec": { # Compute configuration to use for an execution job. # The custom compute configuration for an execution job.
-      "machineSpec": { # Specification of a single machine. # The specification of a single machine for the execution job.
-        "acceleratorCount": 42, # The number of accelerators to attach to the machine.
-        "acceleratorType": "A String", # Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
-        "machineType": "A String", # Immutable. The type of the machine. See the [list of machine types supported for prediction](https://cloud.google.com/vertex-ai/docs/predictions/configure-compute#machine-types) See the [list of machine types supported for custom training](https://cloud.google.com/vertex-ai/docs/training/configure-compute#machine-types). For DeployedModel this field is optional, and the default value is `n1-standard-2`. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
-        "tpuTopology": "A String", # Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
-      },
-      "networkSpec": { # Network spec. # The network configuration to use for the execution job.
-        "enableInternetAccess": True or False, # Whether to enable public internet access. Default false.
-        "network": "A String", # The full name of the Google Compute Engine [network](https://cloud.google.com//compute/docs/networks-and-firewalls#networks)
-        "subnetwork": "A String", # The name of the subnet that this instance is in. Format: `projects/{project_id_or_number}/regions/{region}/subnetworks/{subnetwork_id}`
-      },
-      "persistentDiskSpec": { # Represents the spec of persistent disk options. # The specification of a persistent disk to attach for the execution job.
-        "diskSizeGb": "A String", # Size in GB of the disk (default is 100GB).
-        "diskType": "A String", # Type of the disk (default is "pd-standard"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) "pd-standard" (Persistent Disk Hard Disk Drive) "pd-balanced" (Balanced Persistent Disk) "pd-extreme" (Extreme Persistent Disk)
-      },
-    },
     "dataformRepositorySource": { # The Dataform Repository containing the input notebook. # The Dataform Repository pointing to a single file notebook repository.
       "commitSha": "A String", # The commit SHA to read repository with. If unset, the file will be read at HEAD.
       "dataformRepositoryResourceName": "A String", # The resource name of the Dataform Repository. Format: `projects/{project_id}/locations/{location}/repositories/{repository_id}`
diff --git a/docs/dyn/aiplatform_v1beta1.projects.locations.notebookRuntimeTemplates.html b/docs/dyn/aiplatform_v1beta1.projects.locations.notebookRuntimeTemplates.html
index e4cb277a091..d59005b3fe6 100644
--- a/docs/dyn/aiplatform_v1beta1.projects.locations.notebookRuntimeTemplates.html
+++ b/docs/dyn/aiplatform_v1beta1.projects.locations.notebookRuntimeTemplates.html
@@ -95,6 +98,9 @@

Instance Methods

list_next()

Retrieves the next page of results.

+
+ patch(name, body=None, updateMask=None, x__xgafv=None)
+
+ Updates a NotebookRuntimeTemplate.

setIamPolicy(resource, body=None, x__xgafv=None)

Sets the access control policy on the specified resource. Replaces any existing policy. Can return `NOT_FOUND`, `INVALID_ARGUMENT`, and `PERMISSION_DENIED` errors.

@@ -124,6 +127,9 @@

Method Details

}, "description": "A String", # The description of the NotebookRuntimeTemplate. "displayName": "A String", # Required. The display name of the NotebookRuntimeTemplate. The name can be up to 128 characters long and can consist of any UTF-8 characters. + "encryptionSpec": { # Represents a customer-managed encryption key spec that can be applied to a top-level resource. # Customer-managed encryption key spec for the notebook runtime. + "kmsKeyName": "A String", # Required. The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: `projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key`. The key needs to be in the same region as where the compute resource is created. + }, "etag": "A String", # Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens. "eucConfig": { # The euc configuration of NotebookRuntimeTemplate. # EUC configuration of the NotebookRuntimeTemplate. "bypassActasCheck": True or False, # Output only. Whether ActAs check is bypassed for service account attached to the VM. If false, we need ActAs check for the default Compute Engine Service account. When a Runtime is created, a VM is allocated using Default Compute Engine Service Account. Any user requesting to use this Runtime requires Service Account User (ActAs) permission over this SA. If true, Runtime owner is using EUC and does not require the above permission as VM no longer use default Compute Engine SA, but a P4SA. @@ -153,13 +159,6 @@

Method Details

"A String", ], "notebookRuntimeType": "A String", # Optional. Immutable. The type of the notebook runtime template. - "reservationAffinity": { # Notebook Reservation Affinity for consuming Zonal reservation. # Optional. Reservation Affinity of the notebook runtime template. - "consumeReservationType": "A String", # Required. Specifies the type of reservation from which this instance can consume resources: RESERVATION_ANY (default), RESERVATION_SPECIFIC, or RESERVATION_NONE. See Consuming reserved instances for examples. - "key": "A String", # Optional. Corresponds to the label key of a reservation resource. To target a RESERVATION_SPECIFIC by name, use compute.googleapis.com/reservation-name as the key and specify the name of your reservation as its value. - "values": [ # Optional. Corresponds to the label values of a reservation resource. This must be the full path name of Reservation. - "A String", - ], - }, "serviceAccount": "A String", # The service account that the runtime workload runs as. You can use any service account within the same project, but you must have the service account user permission to use the instance. If not specified, the [Compute Engine default service account](https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used. "shieldedVmConfig": { # A set of Shielded Instance options. See [Images using supported Shielded VM features](https://cloud.google.com/compute/docs/instances/modifying-shielded-vm). # Optional. Immutable. Runtime Shielded VM spec. "enableSecureBoot": True or False, # Defines whether the instance has [Secure Boot](https://cloud.google.com/compute/shielded-vm/docs/shielded-vm#secure-boot) enabled. Secure Boot helps ensure that the system only runs authentic software by verifying the digital signature of all boot components, and halting the boot process if signature verification fails. @@ -254,6 +253,9 @@

Method Details

}, "description": "A String", # The description of the NotebookRuntimeTemplate. "displayName": "A String", # Required. The display name of the NotebookRuntimeTemplate. The name can be up to 128 characters long and can consist of any UTF-8 characters. + "encryptionSpec": { # Represents a customer-managed encryption key spec that can be applied to a top-level resource. # Customer-managed encryption key spec for the notebook runtime. + "kmsKeyName": "A String", # Required. The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: `projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key`. The key needs to be in the same region as where the compute resource is created. + }, "etag": "A String", # Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens. "eucConfig": { # The euc configuration of NotebookRuntimeTemplate. # EUC configuration of the NotebookRuntimeTemplate. "bypassActasCheck": True or False, # Output only. Whether ActAs check is bypassed for service account attached to the VM. If false, we need ActAs check for the default Compute Engine Service account. When a Runtime is created, a VM is allocated using Default Compute Engine Service Account. Any user requesting to use this Runtime requires Service Account User (ActAs) permission over this SA. If true, Runtime owner is using EUC and does not require the above permission as VM no longer use default Compute Engine SA, but a P4SA. @@ -283,13 +285,6 @@

Method Details

"A String", ], "notebookRuntimeType": "A String", # Optional. Immutable. The type of the notebook runtime template. - "reservationAffinity": { # Notebook Reservation Affinity for consuming Zonal reservation. # Optional. Reservation Affinity of the notebook runtime template. - "consumeReservationType": "A String", # Required. Specifies the type of reservation from which this instance can consume resources: RESERVATION_ANY (default), RESERVATION_SPECIFIC, or RESERVATION_NONE. See Consuming reserved instances for examples. - "key": "A String", # Optional. Corresponds to the label key of a reservation resource. To target a RESERVATION_SPECIFIC by name, use compute.googleapis.com/reservation-name as the key and specify the name of your reservation as its value. - "values": [ # Optional. Corresponds to the label values of a reservation resource. This must be the full path name of Reservation. - "A String", - ], - }, "serviceAccount": "A String", # The service account that the runtime workload runs as. You can use any service account within the same project, but you must have the service account user permission to use the instance. If not specified, the [Compute Engine default service account](https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used. "shieldedVmConfig": { # A set of Shielded Instance options. See [Images using supported Shielded VM features](https://cloud.google.com/compute/docs/instances/modifying-shielded-vm). # Optional. Immutable. Runtime Shielded VM spec. "enableSecureBoot": True or False, # Defines whether the instance has [Secure Boot](https://cloud.google.com/compute/shielded-vm/docs/shielded-vm#secure-boot) enabled. Secure Boot helps ensure that the system only runs authentic software by verifying the digital signature of all boot components, and halting the boot process if signature verification fails. @@ -363,6 +358,9 @@

Method Details

}, "description": "A String", # The description of the NotebookRuntimeTemplate. "displayName": "A String", # Required. The display name of the NotebookRuntimeTemplate. The name can be up to 128 characters long and can consist of any UTF-8 characters. + "encryptionSpec": { # Represents a customer-managed encryption key spec that can be applied to a top-level resource. # Customer-managed encryption key spec for the notebook runtime. + "kmsKeyName": "A String", # Required. The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: `projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key`. The key needs to be in the same region as where the compute resource is created. + }, "etag": "A String", # Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens. "eucConfig": { # The euc configuration of NotebookRuntimeTemplate. # EUC configuration of the NotebookRuntimeTemplate. "bypassActasCheck": True or False, # Output only. Whether ActAs check is bypassed for service account attached to the VM. If false, we need ActAs check for the default Compute Engine Service account. When a Runtime is created, a VM is allocated using Default Compute Engine Service Account. Any user requesting to use this Runtime requires Service Account User (ActAs) permission over this SA. If true, Runtime owner is using EUC and does not require the above permission as VM no longer use default Compute Engine SA, but a P4SA. @@ -392,13 +390,6 @@

Method Details

"A String", ], "notebookRuntimeType": "A String", # Optional. Immutable. The type of the notebook runtime template. - "reservationAffinity": { # Notebook Reservation Affinity for consuming Zonal reservation. # Optional. Reservation Affinity of the notebook runtime template. - "consumeReservationType": "A String", # Required. Specifies the type of reservation from which this instance can consume resources: RESERVATION_ANY (default), RESERVATION_SPECIFIC, or RESERVATION_NONE. See Consuming reserved instances for examples. - "key": "A String", # Optional. Corresponds to the label key of a reservation resource. To target a RESERVATION_SPECIFIC by name, use compute.googleapis.com/reservation-name as the key and specify the name of your reservation as its value. - "values": [ # Optional. Corresponds to the label values of a reservation resource. This must be the full path name of Reservation. - "A String", - ], - }, "serviceAccount": "A String", # The service account that the runtime workload runs as. You can use any service account within the same project, but you must have the service account user permission to use the instance. If not specified, the [Compute Engine default service account](https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used. "shieldedVmConfig": { # A set of Shielded Instance options. See [Images using supported Shielded VM features](https://cloud.google.com/compute/docs/instances/modifying-shielded-vm). # Optional. Immutable. Runtime Shielded VM spec. "enableSecureBoot": True or False, # Defines whether the instance has [Secure Boot](https://cloud.google.com/compute/shielded-vm/docs/shielded-vm#secure-boot) enabled. Secure Boot helps ensure that the system only runs authentic software by verifying the digital signature of all boot components, and halting the boot process if signature verification fails. @@ -423,6 +414,119 @@

Method Details

+
+  patch(name, body=None, updateMask=None, x__xgafv=None)
+Updates a NotebookRuntimeTemplate.
+
+Args:
+  name: string, The resource name of the NotebookRuntimeTemplate. (required)
+  body: object, The request body.
+    The object takes the form of:
+
+{ # A template that specifies runtime configurations such as machine type, runtime version, network configurations, etc. Multiple runtimes can be created from a runtime template.
+  "createTime": "A String", # Output only. Timestamp when this NotebookRuntimeTemplate was created.
+  "dataPersistentDiskSpec": { # Represents the spec of persistent disk options. # Optional. The specification of persistent disk attached to the runtime as data disk storage.
+    "diskSizeGb": "A String", # Size in GB of the disk (default is 100GB).
+    "diskType": "A String", # Type of the disk (default is "pd-standard"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) "pd-standard" (Persistent Disk Hard Disk Drive) "pd-balanced" (Balanced Persistent Disk) "pd-extreme" (Extreme Persistent Disk)
+  },
+  "description": "A String", # The description of the NotebookRuntimeTemplate.
+  "displayName": "A String", # Required. The display name of the NotebookRuntimeTemplate. The name can be up to 128 characters long and can consist of any UTF-8 characters.
+  "encryptionSpec": { # Represents a customer-managed encryption key spec that can be applied to a top-level resource. # Customer-managed encryption key spec for the notebook runtime.
+    "kmsKeyName": "A String", # Required. The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: `projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key`. The key needs to be in the same region as where the compute resource is created.
+  },
+  "etag": "A String", # Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
+  "eucConfig": { # The euc configuration of NotebookRuntimeTemplate. # EUC configuration of the NotebookRuntimeTemplate.
+    "bypassActasCheck": True or False, # Output only. Whether ActAs check is bypassed for service account attached to the VM. If false, we need ActAs check for the default Compute Engine Service account. When a Runtime is created, a VM is allocated using Default Compute Engine Service Account. Any user requesting to use this Runtime requires Service Account User (ActAs) permission over this SA. If true, Runtime owner is using EUC and does not require the above permission as VM no longer use default Compute Engine SA, but a P4SA.
+    "eucDisabled": True or False, # Input only. Whether EUC is disabled in this NotebookRuntimeTemplate. In proto3, the default value of a boolean is false. In this way, by default EUC will be enabled for NotebookRuntimeTemplate.
+  },
+  "idleShutdownConfig": { # The idle shutdown configuration of NotebookRuntimeTemplate, which contains the idle_timeout as required field. # The idle shutdown configuration of NotebookRuntimeTemplate. This config will only be set when idle shutdown is enabled.
+    "idleShutdownDisabled": True or False, # Whether Idle Shutdown is disabled in this NotebookRuntimeTemplate.
+    "idleTimeout": "A String", # Required. Duration is accurate to the second. In Notebook, Idle Timeout is accurate to minute so the range of idle_timeout (second) is: 10 * 60 ~ 1440 * 60.
+  },
+  "isDefault": True or False, # Output only. The default template to use if not specified.
+  "labels": { # The labels with user-defined metadata to organize the NotebookRuntimeTemplates. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
+    "a_key": "A String",
+  },
+  "machineSpec": { # Specification of a single machine. # Optional. Immutable. The specification of a single machine for the template.
+    "acceleratorCount": 42, # The number of accelerators to attach to the machine.
+    "acceleratorType": "A String", # Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
+    "machineType": "A String", # Immutable. The type of the machine. See the [list of machine types supported for prediction](https://cloud.google.com/vertex-ai/docs/predictions/configure-compute#machine-types) See the [list of machine types supported for custom training](https://cloud.google.com/vertex-ai/docs/training/configure-compute#machine-types). For DeployedModel this field is optional, and the default value is `n1-standard-2`. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
+    "tpuTopology": "A String", # Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
+  },
+  "name": "A String", # The resource name of the NotebookRuntimeTemplate.
+  "networkSpec": { # Network spec. # Optional. Network spec.
+    "enableInternetAccess": True or False, # Whether to enable public internet access. Default false.
+    "network": "A String", # The full name of the Google Compute Engine [network](https://cloud.google.com//compute/docs/networks-and-firewalls#networks)
+    "subnetwork": "A String", # The name of the subnet that this instance is in. Format: `projects/{project_id_or_number}/regions/{region}/subnetworks/{subnetwork_id}`
+  },
+  "networkTags": [ # Optional. The Compute Engine tags to add to runtime (see [Tagging instances](https://cloud.google.com/vpc/docs/add-remove-network-tags)).
+    "A String",
+  ],
+  "notebookRuntimeType": "A String", # Optional. Immutable. The type of the notebook runtime template.
+  "serviceAccount": "A String", # The service account that the runtime workload runs as. You can use any service account within the same project, but you must have the service account user permission to use the instance. If not specified, the [Compute Engine default service account](https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
+  "shieldedVmConfig": { # A set of Shielded Instance options. See [Images using supported Shielded VM features](https://cloud.google.com/compute/docs/instances/modifying-shielded-vm). # Optional. Immutable. Runtime Shielded VM spec.
+    "enableSecureBoot": True or False, # Defines whether the instance has [Secure Boot](https://cloud.google.com/compute/shielded-vm/docs/shielded-vm#secure-boot) enabled. Secure Boot helps ensure that the system only runs authentic software by verifying the digital signature of all boot components, and halting the boot process if signature verification fails.
+  },
+  "updateTime": "A String", # Output only. Timestamp when this NotebookRuntimeTemplate was most recently updated.
+}
+
+  updateMask: string, Required. The update mask applies to the resource. For the `FieldMask` definition, see google.protobuf.FieldMask. Input format: `{paths: "${updated_field}"}` Updatable fields: * `encryption_spec.kms_key_name`
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # A template that specifies runtime configurations such as machine type, runtime version, network configurations, etc. Multiple runtimes can be created from a runtime template.
+  "createTime": "A String", # Output only. Timestamp when this NotebookRuntimeTemplate was created.
+  "dataPersistentDiskSpec": { # Represents the spec of persistent disk options. # Optional. The specification of persistent disk attached to the runtime as data disk storage.
+    "diskSizeGb": "A String", # Size in GB of the disk (default is 100GB).
+    "diskType": "A String", # Type of the disk (default is "pd-standard"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) "pd-standard" (Persistent Disk Hard Disk Drive) "pd-balanced" (Balanced Persistent Disk) "pd-extreme" (Extreme Persistent Disk)
+  },
+  "description": "A String", # The description of the NotebookRuntimeTemplate.
+  "displayName": "A String", # Required. The display name of the NotebookRuntimeTemplate. The name can be up to 128 characters long and can consist of any UTF-8 characters.
+  "encryptionSpec": { # Represents a customer-managed encryption key spec that can be applied to a top-level resource. # Customer-managed encryption key spec for the notebook runtime.
+    "kmsKeyName": "A String", # Required. The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: `projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key`. The key needs to be in the same region as where the compute resource is created.
+  },
+  "etag": "A String", # Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
+  "eucConfig": { # The euc configuration of NotebookRuntimeTemplate. # EUC configuration of the NotebookRuntimeTemplate.
+    "bypassActasCheck": True or False, # Output only. Whether ActAs check is bypassed for service account attached to the VM. If false, we need ActAs check for the default Compute Engine Service account. When a Runtime is created, a VM is allocated using Default Compute Engine Service Account. Any user requesting to use this Runtime requires Service Account User (ActAs) permission over this SA. If true, Runtime owner is using EUC and does not require the above permission as VM no longer use default Compute Engine SA, but a P4SA.
+    "eucDisabled": True or False, # Input only. Whether EUC is disabled in this NotebookRuntimeTemplate. In proto3, the default value of a boolean is false. In this way, by default EUC will be enabled for NotebookRuntimeTemplate.
+  },
+  "idleShutdownConfig": { # The idle shutdown configuration of NotebookRuntimeTemplate, which contains the idle_timeout as required field. # The idle shutdown configuration of NotebookRuntimeTemplate. This config will only be set when idle shutdown is enabled.
+    "idleShutdownDisabled": True or False, # Whether Idle Shutdown is disabled in this NotebookRuntimeTemplate.
+    "idleTimeout": "A String", # Required. Duration is accurate to the second. In Notebook, Idle Timeout is accurate to minute so the range of idle_timeout (second) is: 10 * 60 ~ 1440 * 60.
+  },
+  "isDefault": True or False, # Output only. The default template to use if not specified.
+  "labels": { # The labels with user-defined metadata to organize the NotebookRuntimeTemplates. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
+    "a_key": "A String",
+  },
+  "machineSpec": { # Specification of a single machine. # Optional. Immutable. The specification of a single machine for the template.
+    "acceleratorCount": 42, # The number of accelerators to attach to the machine.
+    "acceleratorType": "A String", # Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
+    "machineType": "A String", # Immutable. The type of the machine. See the [list of machine types supported for prediction](https://cloud.google.com/vertex-ai/docs/predictions/configure-compute#machine-types) See the [list of machine types supported for custom training](https://cloud.google.com/vertex-ai/docs/training/configure-compute#machine-types). For DeployedModel this field is optional, and the default value is `n1-standard-2`. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
+    "tpuTopology": "A String", # Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
+  },
+  "name": "A String", # The resource name of the NotebookRuntimeTemplate.
+  "networkSpec": { # Network spec. # Optional. Network spec.
+    "enableInternetAccess": True or False, # Whether to enable public internet access. Default false.
+    "network": "A String", # The full name of the Google Compute Engine [network](https://cloud.google.com//compute/docs/networks-and-firewalls#networks)
+    "subnetwork": "A String", # The name of the subnet that this instance is in. Format: `projects/{project_id_or_number}/regions/{region}/subnetworks/{subnetwork_id}`
+  },
+  "networkTags": [ # Optional. The Compute Engine tags to add to runtime (see [Tagging instances](https://cloud.google.com/vpc/docs/add-remove-network-tags)).
+    "A String",
+  ],
+  "notebookRuntimeType": "A String", # Optional. Immutable. The type of the notebook runtime template.
+  "serviceAccount": "A String", # The service account that the runtime workload runs as. You can use any service account within the same project, but you must have the service account user permission to use the instance. If not specified, the [Compute Engine default service account](https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
+  "shieldedVmConfig": { # A set of Shielded Instance options. See [Images using supported Shielded VM features](https://cloud.google.com/compute/docs/instances/modifying-shielded-vm). # Optional. Immutable. Runtime Shielded VM spec.
+    "enableSecureBoot": True or False, # Defines whether the instance has [Secure Boot](https://cloud.google.com/compute/shielded-vm/docs/shielded-vm#secure-boot) enabled. Secure Boot helps ensure that the system only runs authentic software by verifying the digital signature of all boot components, and halting the boot process if signature verification fails.
+  },
+  "updateTime": "A String", # Output only. Timestamp when this NotebookRuntimeTemplate was most recently updated.
+}
+
+
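A minimal usage sketch of the patch method above, assuming a discovery client built with googleapiclient.discovery.build("aiplatform", "v1beta1") and placeholder project, location, template, and KMS key names; per the updateMask note, only `encryption_spec.kms_key_name` is updatable.

    # Sketch: rotate the customer-managed encryption key on an existing template.
    from googleapiclient import discovery

    service = discovery.build("aiplatform", "v1beta1")  # assumed client setup
    name = "projects/my-project/locations/us-central1/notebookRuntimeTemplates/my-template"
    body = {
        "encryptionSpec": {
            # Placeholder key; it must live in the same region as the runtime.
            "kmsKeyName": "projects/my-project/locations/us-central1/keyRings/my-kr/cryptoKeys/my-key",
        },
    }
    template = (
        service.projects()
        .locations()
        .notebookRuntimeTemplates()
        .patch(name=name, body=body, updateMask="encryption_spec.kms_key_name")
        .execute()
    )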
setIamPolicy(resource, body=None, x__xgafv=None)
Sets the access control policy on the specified resource. Replaces any existing policy. Can return `NOT_FOUND`, `INVALID_ARGUMENT`, and `PERMISSION_DENIED` errors.
diff --git a/docs/dyn/aiplatform_v1beta1.projects.locations.notebookRuntimes.html b/docs/dyn/aiplatform_v1beta1.projects.locations.notebookRuntimes.html
index c47321353de..ff8efaee4ca 100644
--- a/docs/dyn/aiplatform_v1beta1.projects.locations.notebookRuntimes.html
+++ b/docs/dyn/aiplatform_v1beta1.projects.locations.notebookRuntimes.html
@@ -119,8 +119,15 @@ Method Details

"createTime": "A String", # Output only. Timestamp when this NotebookRuntime was created. "description": "A String", # The description of the NotebookRuntime. "displayName": "A String", # Required. The display name of the NotebookRuntime. The name can be up to 128 characters long and can consist of any UTF-8 characters. + "encryptionSpec": { # Represents a customer-managed encryption key spec that can be applied to a top-level resource. # Output only. Customer-managed encryption key spec for the notebook runtime. + "kmsKeyName": "A String", # Required. The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: `projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key`. The key needs to be in the same region as where the compute resource is created. + }, "expirationTime": "A String", # Output only. Timestamp when this NotebookRuntime will be expired: 1. System Predefined NotebookRuntime: 24 hours after creation. After expiration, system predifined runtime will be deleted. 2. User created NotebookRuntime: 6 months after last upgrade. After expiration, user created runtime will be stopped and allowed for upgrade. "healthState": "A String", # Output only. The health state of the NotebookRuntime. + "idleShutdownConfig": { # The idle shutdown configuration of NotebookRuntimeTemplate, which contains the idle_timeout as required field. # Output only. The idle shutdown configuration of the notebook runtime. + "idleShutdownDisabled": True or False, # Whether Idle Shutdown is disabled in this NotebookRuntimeTemplate. + "idleTimeout": "A String", # Required. Duration is accurate to the second. In Notebook, Idle Timeout is accurate to minute so the range of idle_timeout (second) is: 10 * 60 ~ 1440 * 60. + }, "isUpgradable": True or False, # Output only. Whether NotebookRuntime is upgradable. "labels": { # The labels with user-defined metadata to organize your NotebookRuntime. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. No more than 64 user labels can be associated with one NotebookRuntime (System labels are excluded). See https://goo.gl/xmQnxf for more information and examples of labels. System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable. Following system labels exist for NotebookRuntime: * "aiplatform.googleapis.com/notebook_runtime_gce_instance_id": output only, its value is the Compute Engine instance id. * "aiplatform.googleapis.com/colab_enterprise_entry_service": its value is either "bigquery" or "vertex"; if absent, it should be "vertex". This is to describe the entry service, either BigQuery or Vertex. "a_key": "A String", @@ -134,13 +141,6 @@

Method Details
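The two added fields are Output only on NotebookRuntime, so they are read rather than set; a short sketch of inspecting them, under the same assumed client and placeholder names as above.

    # Sketch: read back CMEK and idle-shutdown settings from a runtime.
    runtime = (
        service.projects()
        .locations()
        .notebookRuntimes()
        .get(name="projects/my-project/locations/us-central1/notebookRuntimes/my-runtime")
        .execute()
    )
    # Either field may be absent when CMEK or idle shutdown is not configured.
    print(runtime.get("encryptionSpec", {}).get("kmsKeyName"))
    print(runtime.get("idleShutdownConfig", {}).get("idleTimeout"))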

}, "notebookRuntimeType": "A String", # Output only. The type of the notebook runtime. "proxyUri": "A String", # Output only. The proxy endpoint used to access the NotebookRuntime. - "reservationAffinity": { # Notebook Reservation Affinity for consuming Zonal reservation. # Output only. Reservation Affinity of the notebook runtime. - "consumeReservationType": "A String", # Required. Specifies the type of reservation from which this instance can consume resources: RESERVATION_ANY (default), RESERVATION_SPECIFIC, or RESERVATION_NONE. See Consuming reserved instances for examples. - "key": "A String", # Optional. Corresponds to the label key of a reservation resource. To target a RESERVATION_SPECIFIC by name, use compute.googleapis.com/reservation-name as the key and specify the name of your reservation as its value. - "values": [ # Optional. Corresponds to the label values of a reservation resource. This must be the full path name of Reservation. - "A String", - ], - }, "runtimeState": "A String", # Output only. The runtime (instance) state of the NotebookRuntime. "runtimeUser": "A String", # Required. The user email of the NotebookRuntime. "satisfiesPzi": True or False, # Output only. Reserved for future use. @@ -269,8 +269,15 @@

Method Details

"createTime": "A String", # Output only. Timestamp when this NotebookRuntime was created. "description": "A String", # The description of the NotebookRuntime. "displayName": "A String", # Required. The display name of the NotebookRuntime. The name can be up to 128 characters long and can consist of any UTF-8 characters. + "encryptionSpec": { # Represents a customer-managed encryption key spec that can be applied to a top-level resource. # Output only. Customer-managed encryption key spec for the notebook runtime. + "kmsKeyName": "A String", # Required. The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: `projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key`. The key needs to be in the same region as where the compute resource is created. + }, "expirationTime": "A String", # Output only. Timestamp when this NotebookRuntime will be expired: 1. System Predefined NotebookRuntime: 24 hours after creation. After expiration, system predifined runtime will be deleted. 2. User created NotebookRuntime: 6 months after last upgrade. After expiration, user created runtime will be stopped and allowed for upgrade. "healthState": "A String", # Output only. The health state of the NotebookRuntime. + "idleShutdownConfig": { # The idle shutdown configuration of NotebookRuntimeTemplate, which contains the idle_timeout as required field. # Output only. The idle shutdown configuration of the notebook runtime. + "idleShutdownDisabled": True or False, # Whether Idle Shutdown is disabled in this NotebookRuntimeTemplate. + "idleTimeout": "A String", # Required. Duration is accurate to the second. In Notebook, Idle Timeout is accurate to minute so the range of idle_timeout (second) is: 10 * 60 ~ 1440 * 60. + }, "isUpgradable": True or False, # Output only. Whether NotebookRuntime is upgradable. "labels": { # The labels with user-defined metadata to organize your NotebookRuntime. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. No more than 64 user labels can be associated with one NotebookRuntime (System labels are excluded). See https://goo.gl/xmQnxf for more information and examples of labels. System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable. Following system labels exist for NotebookRuntime: * "aiplatform.googleapis.com/notebook_runtime_gce_instance_id": output only, its value is the Compute Engine instance id. * "aiplatform.googleapis.com/colab_enterprise_entry_service": its value is either "bigquery" or "vertex"; if absent, it should be "vertex". This is to describe the entry service, either BigQuery or Vertex. "a_key": "A String", @@ -284,13 +291,6 @@

Method Details

}, "notebookRuntimeType": "A String", # Output only. The type of the notebook runtime. "proxyUri": "A String", # Output only. The proxy endpoint used to access the NotebookRuntime. - "reservationAffinity": { # Notebook Reservation Affinity for consuming Zonal reservation. # Output only. Reservation Affinity of the notebook runtime. - "consumeReservationType": "A String", # Required. Specifies the type of reservation from which this instance can consume resources: RESERVATION_ANY (default), RESERVATION_SPECIFIC, or RESERVATION_NONE. See Consuming reserved instances for examples. - "key": "A String", # Optional. Corresponds to the label key of a reservation resource. To target a RESERVATION_SPECIFIC by name, use compute.googleapis.com/reservation-name as the key and specify the name of your reservation as its value. - "values": [ # Optional. Corresponds to the label values of a reservation resource. This must be the full path name of Reservation. - "A String", - ], - }, "runtimeState": "A String", # Output only. The runtime (instance) state of the NotebookRuntime. "runtimeUser": "A String", # Required. The user email of the NotebookRuntime. "satisfiesPzi": True or False, # Output only. Reserved for future use. @@ -327,8 +327,15 @@

Method Details

"createTime": "A String", # Output only. Timestamp when this NotebookRuntime was created. "description": "A String", # The description of the NotebookRuntime. "displayName": "A String", # Required. The display name of the NotebookRuntime. The name can be up to 128 characters long and can consist of any UTF-8 characters. + "encryptionSpec": { # Represents a customer-managed encryption key spec that can be applied to a top-level resource. # Output only. Customer-managed encryption key spec for the notebook runtime. + "kmsKeyName": "A String", # Required. The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: `projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key`. The key needs to be in the same region as where the compute resource is created. + }, "expirationTime": "A String", # Output only. Timestamp when this NotebookRuntime will be expired: 1. System Predefined NotebookRuntime: 24 hours after creation. After expiration, system predifined runtime will be deleted. 2. User created NotebookRuntime: 6 months after last upgrade. After expiration, user created runtime will be stopped and allowed for upgrade. "healthState": "A String", # Output only. The health state of the NotebookRuntime. + "idleShutdownConfig": { # The idle shutdown configuration of NotebookRuntimeTemplate, which contains the idle_timeout as required field. # Output only. The idle shutdown configuration of the notebook runtime. + "idleShutdownDisabled": True or False, # Whether Idle Shutdown is disabled in this NotebookRuntimeTemplate. + "idleTimeout": "A String", # Required. Duration is accurate to the second. In Notebook, Idle Timeout is accurate to minute so the range of idle_timeout (second) is: 10 * 60 ~ 1440 * 60. + }, "isUpgradable": True or False, # Output only. Whether NotebookRuntime is upgradable. "labels": { # The labels with user-defined metadata to organize your NotebookRuntime. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. No more than 64 user labels can be associated with one NotebookRuntime (System labels are excluded). See https://goo.gl/xmQnxf for more information and examples of labels. System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable. Following system labels exist for NotebookRuntime: * "aiplatform.googleapis.com/notebook_runtime_gce_instance_id": output only, its value is the Compute Engine instance id. * "aiplatform.googleapis.com/colab_enterprise_entry_service": its value is either "bigquery" or "vertex"; if absent, it should be "vertex". This is to describe the entry service, either BigQuery or Vertex. "a_key": "A String", @@ -342,13 +349,6 @@

Method Details

}, "notebookRuntimeType": "A String", # Output only. The type of the notebook runtime. "proxyUri": "A String", # Output only. The proxy endpoint used to access the NotebookRuntime. - "reservationAffinity": { # Notebook Reservation Affinity for consuming Zonal reservation. # Output only. Reservation Affinity of the notebook runtime. - "consumeReservationType": "A String", # Required. Specifies the type of reservation from which this instance can consume resources: RESERVATION_ANY (default), RESERVATION_SPECIFIC, or RESERVATION_NONE. See Consuming reserved instances for examples. - "key": "A String", # Optional. Corresponds to the label key of a reservation resource. To target a RESERVATION_SPECIFIC by name, use compute.googleapis.com/reservation-name as the key and specify the name of your reservation as its value. - "values": [ # Optional. Corresponds to the label values of a reservation resource. This must be the full path name of Reservation. - "A String", - ], - }, "runtimeState": "A String", # Output only. The runtime (instance) state of the NotebookRuntime. "runtimeUser": "A String", # Required. The user email of the NotebookRuntime. "satisfiesPzi": True or False, # Output only. Reserved for future use. diff --git a/docs/dyn/aiplatform_v1beta1.projects.locations.publishers.models.html b/docs/dyn/aiplatform_v1beta1.projects.locations.publishers.models.html index 20cd2d3712e..cde13c36aee 100644 --- a/docs/dyn/aiplatform_v1beta1.projects.locations.publishers.models.html +++ b/docs/dyn/aiplatform_v1beta1.projects.locations.publishers.models.html @@ -218,6 +218,7 @@

Method Details

The object takes the form of:

{ # Request message for [PredictionService.GenerateContent].
+  "cachedContent": "A String", # Optional. The name of the cached content used as context to serve the prediction. Note: only used in explicit caching, where users can have control over caching (e.g. what content to cache) and enjoy guaranteed cost savings. Format: `projects/{project}/locations/{location}/cachedContents/{cachedContent}`
  "contents": [ # Required. The content of the current conversation with the model. For single-turn queries, this is a single instance. For multi-turn queries, this is a repeated field that contains conversation history + latest request.
    { # The base structured datatype containing multi-part content of a message. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn.
      "parts": [ # Required. Ordered `Parts` that constitute a single message. Parts may have different IANA MIME types.
@@ -258,7 +259,34 @@ Method Details

"maxOutputTokens": 42, # Optional. The maximum number of output tokens to generate per message. "presencePenalty": 3.14, # Optional. Positive penalties. "responseMimeType": "A String", # Optional. Output response mimetype of the generated candidate text. Supported mimetype: - `text/plain`: (default) Text output. - `application/json`: JSON response in the candidates. The model needs to be prompted to output the appropriate response type, otherwise the behavior is undefined. This is a preview feature. - "responseStyle": "A String", # Optional. Control Three levels of creativity in the model output. Default: RESPONSE_STYLE_BALANCED + "responseSchema": { # Schema is used to define the format of input/output data. Represents a select subset of an [OpenAPI 3.0 schema object](https://spec.openapis.org/oas/v3.0.3#schema). More fields may be added in the future as needed. # Optional. The `Schema` object allows the definition of input and output data types. These types can be objects, but also primitives and arrays. Represents a select subset of an [OpenAPI 3.0 schema object](https://spec.openapis.org/oas/v3.0.3#schema). If set, a compatible response_mime_type must also be set. Compatible mimetypes: `application/json`: Schema for JSON response. + "default": "", # Optional. Default value of the data. + "description": "A String", # Optional. The description of the data. + "enum": [ # Optional. Possible values of the element of Type.STRING with enum format. For example we can define an Enum Direction as : {type:STRING, format:enum, enum:["EAST", NORTH", "SOUTH", "WEST"]} + "A String", + ], + "example": "", # Optional. Example of the object. Will only populated when the object is the root. + "format": "A String", # Optional. The format of the data. Supported formats: for NUMBER type: "float", "double" for INTEGER type: "int32", "int64" for STRING type: "email", "byte", etc + "items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. SCHEMA FIELDS FOR TYPE ARRAY Schema of the elements of Type.ARRAY. + "maxItems": "A String", # Optional. Maximum number of the elements for Type.ARRAY. + "maxLength": "A String", # Optional. Maximum length of the Type.STRING + "maxProperties": "A String", # Optional. Maximum number of the properties for Type.OBJECT. + "maximum": 3.14, # Optional. Maximum value of the Type.INTEGER and Type.NUMBER + "minItems": "A String", # Optional. Minimum number of the elements for Type.ARRAY. + "minLength": "A String", # Optional. SCHEMA FIELDS FOR TYPE STRING Minimum length of the Type.STRING + "minProperties": "A String", # Optional. Minimum number of the properties for Type.OBJECT. + "minimum": 3.14, # Optional. SCHEMA FIELDS FOR TYPE INTEGER and NUMBER Minimum value of the Type.INTEGER and Type.NUMBER + "nullable": True or False, # Optional. Indicates if the value may be null. + "pattern": "A String", # Optional. Pattern of the Type.STRING to restrict a string to a regular expression. + "properties": { # Optional. SCHEMA FIELDS FOR TYPE OBJECT Properties of Type.OBJECT. + "a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema + }, + "required": [ # Optional. Required properties of Type.OBJECT. + "A String", + ], + "title": "A String", # Optional. The title of the Schema. + "type": "A String", # Optional. The type of the data. + }, "stopSequences": [ # Optional. Stop sequences. "A String", ], @@ -377,6 +405,8 @@

Method Details

        },
      },
    ],
+    "googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. GoogleSearchRetrieval tool type. Specialized retrieval tool that is powered by Google search.
+    },
    "retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
      "disableAttribution": True or False, # Optional. Disable using the result from this tool in detecting grounding attribution. This does not affect how the result is given to the model for generation.
      "vertexAiSearch": { # Retrieve from Vertex AI Search datastore for grounding. See https://cloud.google.com/vertex-ai-search-and-conversation # Set to use data source powered by Vertex AI Search.
@@ -819,6 +849,7 @@ Method Details
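A sketch of a generateContent request that exercises the additions above, assuming the same placeholder client and resource names; cachedContent references a pre-created explicit cache, and responseSchema is paired with the compatible `application/json` response_mime_type that the field description requires.

    # Sketch: JSON-constrained generation against a publisher model.
    model = "projects/my-project/locations/us-central1/publishers/google/models/gemini-1.5-pro"
    body = {
        # Placeholder explicit-caching resource; optional.
        "cachedContent": "projects/my-project/locations/us-central1/cachedContents/my-cache",
        "contents": [
            {"role": "user", "parts": [{"text": "List three oceans."}]},
        ],
        "generationConfig": {
            "responseMimeType": "application/json",
            "responseSchema": {"type": "ARRAY", "items": {"type": "STRING"}},
        },
        # The new googleSearchRetrieval tool would go here instead, e.g.
        # "tools": [{"googleSearchRetrieval": {}}], when grounding on web data.
    }
    response = (
        service.projects()
        .locations()
        .publishers()
        .models()
        .generateContent(model=model, body=body)
        .execute()
    )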

The object takes the form of:

{ # Request message for [PredictionService.GenerateContent].
+  "cachedContent": "A String", # Optional. The name of the cached content used as context to serve the prediction. Note: only used in explicit caching, where users can have control over caching (e.g. what content to cache) and enjoy guaranteed cost savings. Format: `projects/{project}/locations/{location}/cachedContents/{cachedContent}`
  "contents": [ # Required. The content of the current conversation with the model. For single-turn queries, this is a single instance. For multi-turn queries, this is a repeated field that contains conversation history + latest request.
    { # The base structured datatype containing multi-part content of a message. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn.
      "parts": [ # Required. Ordered `Parts` that constitute a single message. Parts may have different IANA MIME types.
@@ -859,7 +890,34 @@ Method Details

"maxOutputTokens": 42, # Optional. The maximum number of output tokens to generate per message. "presencePenalty": 3.14, # Optional. Positive penalties. "responseMimeType": "A String", # Optional. Output response mimetype of the generated candidate text. Supported mimetype: - `text/plain`: (default) Text output. - `application/json`: JSON response in the candidates. The model needs to be prompted to output the appropriate response type, otherwise the behavior is undefined. This is a preview feature. - "responseStyle": "A String", # Optional. Control Three levels of creativity in the model output. Default: RESPONSE_STYLE_BALANCED + "responseSchema": { # Schema is used to define the format of input/output data. Represents a select subset of an [OpenAPI 3.0 schema object](https://spec.openapis.org/oas/v3.0.3#schema). More fields may be added in the future as needed. # Optional. The `Schema` object allows the definition of input and output data types. These types can be objects, but also primitives and arrays. Represents a select subset of an [OpenAPI 3.0 schema object](https://spec.openapis.org/oas/v3.0.3#schema). If set, a compatible response_mime_type must also be set. Compatible mimetypes: `application/json`: Schema for JSON response. + "default": "", # Optional. Default value of the data. + "description": "A String", # Optional. The description of the data. + "enum": [ # Optional. Possible values of the element of Type.STRING with enum format. For example we can define an Enum Direction as : {type:STRING, format:enum, enum:["EAST", NORTH", "SOUTH", "WEST"]} + "A String", + ], + "example": "", # Optional. Example of the object. Will only populated when the object is the root. + "format": "A String", # Optional. The format of the data. Supported formats: for NUMBER type: "float", "double" for INTEGER type: "int32", "int64" for STRING type: "email", "byte", etc + "items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. SCHEMA FIELDS FOR TYPE ARRAY Schema of the elements of Type.ARRAY. + "maxItems": "A String", # Optional. Maximum number of the elements for Type.ARRAY. + "maxLength": "A String", # Optional. Maximum length of the Type.STRING + "maxProperties": "A String", # Optional. Maximum number of the properties for Type.OBJECT. + "maximum": 3.14, # Optional. Maximum value of the Type.INTEGER and Type.NUMBER + "minItems": "A String", # Optional. Minimum number of the elements for Type.ARRAY. + "minLength": "A String", # Optional. SCHEMA FIELDS FOR TYPE STRING Minimum length of the Type.STRING + "minProperties": "A String", # Optional. Minimum number of the properties for Type.OBJECT. + "minimum": 3.14, # Optional. SCHEMA FIELDS FOR TYPE INTEGER and NUMBER Minimum value of the Type.INTEGER and Type.NUMBER + "nullable": True or False, # Optional. Indicates if the value may be null. + "pattern": "A String", # Optional. Pattern of the Type.STRING to restrict a string to a regular expression. + "properties": { # Optional. SCHEMA FIELDS FOR TYPE OBJECT Properties of Type.OBJECT. + "a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema + }, + "required": [ # Optional. Required properties of Type.OBJECT. + "A String", + ], + "title": "A String", # Optional. The title of the Schema. + "type": "A String", # Optional. The type of the data. + }, "stopSequences": [ # Optional. Stop sequences. "A String", ], @@ -978,6 +1036,8 @@

Method Details

        },
      },
    ],
+    "googleSearchRetrieval": { # Tool to retrieve public web data for grounding, powered by Google. # Optional. GoogleSearchRetrieval tool type. Specialized retrieval tool that is powered by Google search.
+    },
    "retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation.
      "disableAttribution": True or False, # Optional. Disable using the result from this tool in detecting grounding attribution. This does not affect how the result is given to the model for generation.
      "vertexAiSearch": { # Retrieve from Vertex AI Search datastore for grounding. See https://cloud.google.com/vertex-ai-search-and-conversation # Set to use data source powered by Vertex AI Search.
diff --git a/docs/dyn/aiplatform_v1beta1.projects.locations.reasoningEngines.html b/docs/dyn/aiplatform_v1beta1.projects.locations.reasoningEngines.html
index b7501e96508..fa8937a2862 100644
--- a/docs/dyn/aiplatform_v1beta1.projects.locations.reasoningEngines.html
+++ b/docs/dyn/aiplatform_v1beta1.projects.locations.reasoningEngines.html
@@ -97,6 +97,9 @@ Instance Methods

list_next()

Retrieves the next page of results.

+  patch(name, body=None, updateMask=None, x__xgafv=None)

+Updates a reasoning engine.

query(name, body=None, x__xgafv=None)

Queries using a reasoning engine.

@@ -297,6 +300,67 @@ Method Details

+
+  patch(name, body=None, updateMask=None, x__xgafv=None)
+Updates a reasoning engine.
+
+Args:
+  name: string, Identifier. The resource name of the ReasoningEngine. (required)
+  body: object, The request body.
+    The object takes the form of:
+
+{ # ReasoningEngine provides a customizable runtime for models to determine which actions to take and in which order.
+  "createTime": "A String", # Output only. Timestamp when this ReasoningEngine was created.
+  "description": "A String", # Optional. The description of the ReasoningEngine.
+  "displayName": "A String", # Required. The display name of the ReasoningEngine.
+  "etag": "A String", # Optional. Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
+  "name": "A String", # Identifier. The resource name of the ReasoningEngine.
+  "spec": { # ReasoningEngine configurations # Required. Configurations of the ReasoningEngine
+    "classMethods": [ # Optional. Declarations for object class methods.
+      {
+        "a_key": "", # Properties of the object.
+      },
+    ],
+    "packageSpec": { # User provided package spec like pickled object and package requirements. # Required. User provided package spec of the ReasoningEngine.
+      "dependencyFilesGcsUri": "A String", # Optional. The Cloud Storage URI of the dependency files in tar.gz format.
+      "pickleObjectGcsUri": "A String", # Optional. The Cloud Storage URI of the pickled python object.
+      "pythonVersion": "A String", # Optional. The Python version. Currently support 3.8, 3.9, 3.10, 3.11. If not specified, default value is 3.10.
+      "requirementsGcsUri": "A String", # Optional. The Cloud Storage URI of the `requirements.txt` file
+    },
+  },
+  "updateTime": "A String", # Output only. Timestamp when this ReasoningEngine was most recently updated.
+}
+
+  updateMask: string, Required. Mask specifying which fields to update.
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # This resource represents a long-running operation that is the result of a network API call.
+  "done": True or False, # If the value is `false`, it means the operation is still in progress. If `true`, the operation is completed, and either `error` or `response` is available.
+  "error": { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # The error result of the operation in case of failure or cancellation.
+    "code": 42, # The status code, which should be an enum value of google.rpc.Code.
+    "details": [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
+      {
+        "a_key": "", # Properties of the object. Contains field @type with type URL.
+      },
+    ],
+    "message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
+  },
+  "metadata": { # Service-specific metadata associated with the operation. It typically contains progress information and common metadata such as create time. Some services might not provide such metadata. Any method that returns a long-running operation should document the metadata type, if any.
+    "a_key": "", # Properties of the object. Contains field @type with type URL.
+  },
+  "name": "A String", # The server-assigned name, which is only unique within the same service that originally returns it. If you use the default HTTP mapping, the `name` should be a resource name ending with `operations/{unique_id}`.
+  "response": { # The normal, successful response of the operation. If the original method returns no data on success, such as `Delete`, the response is `google.protobuf.Empty`. If the original method is standard `Get`/`Create`/`Update`, the response should be the resource. For other methods, the response should have the type `XxxResponse`, where `Xxx` is the original method name. For example, if the original method name is `TakeSnapshot()`, the inferred response type is `TakeSnapshotResponse`.
+    "a_key": "", # Properties of the object. Contains field @type with type URL.
+  },
+}
+
+
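Because patch returns a long-running operation, callers poll it to completion; a minimal sketch under the same assumed client, with `display_name` as a hypothetical updatable field (the updateMask doc above does not enumerate them).

    # Sketch: rename a ReasoningEngine and wait for the operation.
    import time

    op = (
        service.projects()
        .locations()
        .reasoningEngines()
        .patch(
            name="projects/my-project/locations/us-central1/reasoningEngines/my-engine",
            body={"displayName": "my-engine-v2"},
            updateMask="display_name",
        )
        .execute()
    )
    while not op.get("done"):
        time.sleep(5)  # simple fixed polling interval for the sketch
        op = service.projects().locations().operations().get(name=op["name"]).execute()
    if "error" in op:
        raise RuntimeError(op["error"].get("message"))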
query(name, body=None, x__xgafv=None)
Queries using a reasoning engine.
diff --git a/docs/dyn/aiplatform_v1beta1.projects.locations.schedules.html b/docs/dyn/aiplatform_v1beta1.projects.locations.schedules.html
index c55afbb193f..3b47bbc3a91 100644
--- a/docs/dyn/aiplatform_v1beta1.projects.locations.schedules.html
+++ b/docs/dyn/aiplatform_v1beta1.projects.locations.schedules.html
@@ -436,23 +436,6 @@ Method Details

"createNotebookExecutionJobRequest": { # Request message for [NotebookService.CreateNotebookExecutionJob] # Request for NotebookService.CreateNotebookExecutionJob. "notebookExecutionJob": { # NotebookExecutionJob represents an instance of a notebook execution. # Required. The NotebookExecutionJob to create. "createTime": "A String", # Output only. Timestamp when this NotebookExecutionJob was created. - "customEnvironmentSpec": { # Compute configuration to use for an execution job. # The custom compute configuration for an execution job. - "machineSpec": { # Specification of a single machine. # The specification of a single machine for the execution job. - "acceleratorCount": 42, # The number of accelerators to attach to the machine. - "acceleratorType": "A String", # Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count. - "machineType": "A String", # Immutable. The type of the machine. See the [list of machine types supported for prediction](https://cloud.google.com/vertex-ai/docs/predictions/configure-compute#machine-types) See the [list of machine types supported for custom training](https://cloud.google.com/vertex-ai/docs/training/configure-compute#machine-types). For DeployedModel this field is optional, and the default value is `n1-standard-2`. For BatchPredictionJob or as part of WorkerPoolSpec this field is required. - "tpuTopology": "A String", # Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1"). - }, - "networkSpec": { # Network spec. # The network configuration to use for the execution job. - "enableInternetAccess": True or False, # Whether to enable public internet access. Default false. - "network": "A String", # The full name of the Google Compute Engine [network](https://cloud.google.com//compute/docs/networks-and-firewalls#networks) - "subnetwork": "A String", # The name of the subnet that this instance is in. Format: `projects/{project_id_or_number}/regions/{region}/subnetworks/{subnetwork_id}` - }, - "persistentDiskSpec": { # Represents the spec of persistent disk options. # The specification of a persistent disk to attach for the execution job. - "diskSizeGb": "A String", # Size in GB of the disk (default is 100GB). - "diskType": "A String", # Type of the disk (default is "pd-standard"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) "pd-standard" (Persistent Disk Hard Disk Drive) "pd-balanced" (Balanced Persistent Disk) "pd-extreme" (Extreme Persistent Disk) - }, - }, "dataformRepositorySource": { # The Dataform Repository containing the input notebook. # The Dataform Repository pointing to a single file notebook repository. "commitSha": "A String", # The commit SHA to read repository with. If unset, the file will be read at HEAD. "dataformRepositoryResourceName": "A String", # The resource name of the Dataform Repository. Format: `projects/{project_id}/locations/{location}/repositories/{repository_id}` @@ -1052,23 +1035,6 @@

Method Details

"createNotebookExecutionJobRequest": { # Request message for [NotebookService.CreateNotebookExecutionJob] # Request for NotebookService.CreateNotebookExecutionJob. "notebookExecutionJob": { # NotebookExecutionJob represents an instance of a notebook execution. # Required. The NotebookExecutionJob to create. "createTime": "A String", # Output only. Timestamp when this NotebookExecutionJob was created. - "customEnvironmentSpec": { # Compute configuration to use for an execution job. # The custom compute configuration for an execution job. - "machineSpec": { # Specification of a single machine. # The specification of a single machine for the execution job. - "acceleratorCount": 42, # The number of accelerators to attach to the machine. - "acceleratorType": "A String", # Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count. - "machineType": "A String", # Immutable. The type of the machine. See the [list of machine types supported for prediction](https://cloud.google.com/vertex-ai/docs/predictions/configure-compute#machine-types) See the [list of machine types supported for custom training](https://cloud.google.com/vertex-ai/docs/training/configure-compute#machine-types). For DeployedModel this field is optional, and the default value is `n1-standard-2`. For BatchPredictionJob or as part of WorkerPoolSpec this field is required. - "tpuTopology": "A String", # Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1"). - }, - "networkSpec": { # Network spec. # The network configuration to use for the execution job. - "enableInternetAccess": True or False, # Whether to enable public internet access. Default false. - "network": "A String", # The full name of the Google Compute Engine [network](https://cloud.google.com//compute/docs/networks-and-firewalls#networks) - "subnetwork": "A String", # The name of the subnet that this instance is in. Format: `projects/{project_id_or_number}/regions/{region}/subnetworks/{subnetwork_id}` - }, - "persistentDiskSpec": { # Represents the spec of persistent disk options. # The specification of a persistent disk to attach for the execution job. - "diskSizeGb": "A String", # Size in GB of the disk (default is 100GB). - "diskType": "A String", # Type of the disk (default is "pd-standard"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) "pd-standard" (Persistent Disk Hard Disk Drive) "pd-balanced" (Balanced Persistent Disk) "pd-extreme" (Extreme Persistent Disk) - }, - }, "dataformRepositorySource": { # The Dataform Repository containing the input notebook. # The Dataform Repository pointing to a single file notebook repository. "commitSha": "A String", # The commit SHA to read repository with. If unset, the file will be read at HEAD. "dataformRepositoryResourceName": "A String", # The resource name of the Dataform Repository. Format: `projects/{project_id}/locations/{location}/repositories/{repository_id}` @@ -1710,23 +1676,6 @@

Method Details

"createNotebookExecutionJobRequest": { # Request message for [NotebookService.CreateNotebookExecutionJob] # Request for NotebookService.CreateNotebookExecutionJob. "notebookExecutionJob": { # NotebookExecutionJob represents an instance of a notebook execution. # Required. The NotebookExecutionJob to create. "createTime": "A String", # Output only. Timestamp when this NotebookExecutionJob was created. - "customEnvironmentSpec": { # Compute configuration to use for an execution job. # The custom compute configuration for an execution job. - "machineSpec": { # Specification of a single machine. # The specification of a single machine for the execution job. - "acceleratorCount": 42, # The number of accelerators to attach to the machine. - "acceleratorType": "A String", # Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count. - "machineType": "A String", # Immutable. The type of the machine. See the [list of machine types supported for prediction](https://cloud.google.com/vertex-ai/docs/predictions/configure-compute#machine-types) See the [list of machine types supported for custom training](https://cloud.google.com/vertex-ai/docs/training/configure-compute#machine-types). For DeployedModel this field is optional, and the default value is `n1-standard-2`. For BatchPredictionJob or as part of WorkerPoolSpec this field is required. - "tpuTopology": "A String", # Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1"). - }, - "networkSpec": { # Network spec. # The network configuration to use for the execution job. - "enableInternetAccess": True or False, # Whether to enable public internet access. Default false. - "network": "A String", # The full name of the Google Compute Engine [network](https://cloud.google.com//compute/docs/networks-and-firewalls#networks) - "subnetwork": "A String", # The name of the subnet that this instance is in. Format: `projects/{project_id_or_number}/regions/{region}/subnetworks/{subnetwork_id}` - }, - "persistentDiskSpec": { # Represents the spec of persistent disk options. # The specification of a persistent disk to attach for the execution job. - "diskSizeGb": "A String", # Size in GB of the disk (default is 100GB). - "diskType": "A String", # Type of the disk (default is "pd-standard"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) "pd-standard" (Persistent Disk Hard Disk Drive) "pd-balanced" (Balanced Persistent Disk) "pd-extreme" (Extreme Persistent Disk) - }, - }, "dataformRepositorySource": { # The Dataform Repository containing the input notebook. # The Dataform Repository pointing to a single file notebook repository. "commitSha": "A String", # The commit SHA to read repository with. If unset, the file will be read at HEAD. "dataformRepositoryResourceName": "A String", # The resource name of the Dataform Repository. Format: `projects/{project_id}/locations/{location}/repositories/{repository_id}` @@ -2340,23 +2289,6 @@

Method Details

"createNotebookExecutionJobRequest": { # Request message for [NotebookService.CreateNotebookExecutionJob] # Request for NotebookService.CreateNotebookExecutionJob. "notebookExecutionJob": { # NotebookExecutionJob represents an instance of a notebook execution. # Required. The NotebookExecutionJob to create. "createTime": "A String", # Output only. Timestamp when this NotebookExecutionJob was created. - "customEnvironmentSpec": { # Compute configuration to use for an execution job. # The custom compute configuration for an execution job. - "machineSpec": { # Specification of a single machine. # The specification of a single machine for the execution job. - "acceleratorCount": 42, # The number of accelerators to attach to the machine. - "acceleratorType": "A String", # Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count. - "machineType": "A String", # Immutable. The type of the machine. See the [list of machine types supported for prediction](https://cloud.google.com/vertex-ai/docs/predictions/configure-compute#machine-types) See the [list of machine types supported for custom training](https://cloud.google.com/vertex-ai/docs/training/configure-compute#machine-types). For DeployedModel this field is optional, and the default value is `n1-standard-2`. For BatchPredictionJob or as part of WorkerPoolSpec this field is required. - "tpuTopology": "A String", # Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1"). - }, - "networkSpec": { # Network spec. # The network configuration to use for the execution job. - "enableInternetAccess": True or False, # Whether to enable public internet access. Default false. - "network": "A String", # The full name of the Google Compute Engine [network](https://cloud.google.com//compute/docs/networks-and-firewalls#networks) - "subnetwork": "A String", # The name of the subnet that this instance is in. Format: `projects/{project_id_or_number}/regions/{region}/subnetworks/{subnetwork_id}` - }, - "persistentDiskSpec": { # Represents the spec of persistent disk options. # The specification of a persistent disk to attach for the execution job. - "diskSizeGb": "A String", # Size in GB of the disk (default is 100GB). - "diskType": "A String", # Type of the disk (default is "pd-standard"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) "pd-standard" (Persistent Disk Hard Disk Drive) "pd-balanced" (Balanced Persistent Disk) "pd-extreme" (Extreme Persistent Disk) - }, - }, "dataformRepositorySource": { # The Dataform Repository containing the input notebook. # The Dataform Repository pointing to a single file notebook repository. "commitSha": "A String", # The commit SHA to read repository with. If unset, the file will be read at HEAD. "dataformRepositoryResourceName": "A String", # The resource name of the Dataform Repository. Format: `projects/{project_id}/locations/{location}/repositories/{repository_id}` @@ -2974,23 +2906,6 @@

@@ -2974,23 +2906,6 @@ Method Details

"createNotebookExecutionJobRequest": { # Request message for [NotebookService.CreateNotebookExecutionJob] # Request for NotebookService.CreateNotebookExecutionJob. "notebookExecutionJob": { # NotebookExecutionJob represents an instance of a notebook execution. # Required. The NotebookExecutionJob to create. "createTime": "A String", # Output only. Timestamp when this NotebookExecutionJob was created. - "customEnvironmentSpec": { # Compute configuration to use for an execution job. # The custom compute configuration for an execution job. - "machineSpec": { # Specification of a single machine. # The specification of a single machine for the execution job. - "acceleratorCount": 42, # The number of accelerators to attach to the machine. - "acceleratorType": "A String", # Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count. - "machineType": "A String", # Immutable. The type of the machine. See the [list of machine types supported for prediction](https://cloud.google.com/vertex-ai/docs/predictions/configure-compute#machine-types) See the [list of machine types supported for custom training](https://cloud.google.com/vertex-ai/docs/training/configure-compute#machine-types). For DeployedModel this field is optional, and the default value is `n1-standard-2`. For BatchPredictionJob or as part of WorkerPoolSpec this field is required. - "tpuTopology": "A String", # Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1"). - }, - "networkSpec": { # Network spec. # The network configuration to use for the execution job. - "enableInternetAccess": True or False, # Whether to enable public internet access. Default false. - "network": "A String", # The full name of the Google Compute Engine [network](https://cloud.google.com//compute/docs/networks-and-firewalls#networks) - "subnetwork": "A String", # The name of the subnet that this instance is in. Format: `projects/{project_id_or_number}/regions/{region}/subnetworks/{subnetwork_id}` - }, - "persistentDiskSpec": { # Represents the spec of persistent disk options. # The specification of a persistent disk to attach for the execution job. - "diskSizeGb": "A String", # Size in GB of the disk (default is 100GB). - "diskType": "A String", # Type of the disk (default is "pd-standard"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) "pd-standard" (Persistent Disk Hard Disk Drive) "pd-balanced" (Balanced Persistent Disk) "pd-extreme" (Extreme Persistent Disk) - }, - }, "dataformRepositorySource": { # The Dataform Repository containing the input notebook. # The Dataform Repository pointing to a single file notebook repository. "commitSha": "A String", # The commit SHA to read repository with. If unset, the file will be read at HEAD. "dataformRepositoryResourceName": "A String", # The resource name of the Dataform Repository. Format: `projects/{project_id}/locations/{location}/repositories/{repository_id}` @@ -3591,23 +3506,6 @@

@@ -3591,23 +3506,6 @@ Method Details

"createNotebookExecutionJobRequest": { # Request message for [NotebookService.CreateNotebookExecutionJob] # Request for NotebookService.CreateNotebookExecutionJob. "notebookExecutionJob": { # NotebookExecutionJob represents an instance of a notebook execution. # Required. The NotebookExecutionJob to create. "createTime": "A String", # Output only. Timestamp when this NotebookExecutionJob was created. - "customEnvironmentSpec": { # Compute configuration to use for an execution job. # The custom compute configuration for an execution job. - "machineSpec": { # Specification of a single machine. # The specification of a single machine for the execution job. - "acceleratorCount": 42, # The number of accelerators to attach to the machine. - "acceleratorType": "A String", # Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count. - "machineType": "A String", # Immutable. The type of the machine. See the [list of machine types supported for prediction](https://cloud.google.com/vertex-ai/docs/predictions/configure-compute#machine-types) See the [list of machine types supported for custom training](https://cloud.google.com/vertex-ai/docs/training/configure-compute#machine-types). For DeployedModel this field is optional, and the default value is `n1-standard-2`. For BatchPredictionJob or as part of WorkerPoolSpec this field is required. - "tpuTopology": "A String", # Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1"). - }, - "networkSpec": { # Network spec. # The network configuration to use for the execution job. - "enableInternetAccess": True or False, # Whether to enable public internet access. Default false. - "network": "A String", # The full name of the Google Compute Engine [network](https://cloud.google.com//compute/docs/networks-and-firewalls#networks) - "subnetwork": "A String", # The name of the subnet that this instance is in. Format: `projects/{project_id_or_number}/regions/{region}/subnetworks/{subnetwork_id}` - }, - "persistentDiskSpec": { # Represents the spec of persistent disk options. # The specification of a persistent disk to attach for the execution job. - "diskSizeGb": "A String", # Size in GB of the disk (default is 100GB). - "diskType": "A String", # Type of the disk (default is "pd-standard"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) "pd-standard" (Persistent Disk Hard Disk Drive) "pd-balanced" (Balanced Persistent Disk) "pd-extreme" (Extreme Persistent Disk) - }, - }, "dataformRepositorySource": { # The Dataform Repository containing the input notebook. # The Dataform Repository pointing to a single file notebook repository. "commitSha": "A String", # The commit SHA to read repository with. If unset, the file will be read at HEAD. "dataformRepositoryResourceName": "A String", # The resource name of the Dataform Repository. Format: `projects/{project_id}/locations/{location}/repositories/{repository_id}` diff --git a/docs/dyn/aiplatform_v1beta1.projects.locations.trainingPipelines.html b/docs/dyn/aiplatform_v1beta1.projects.locations.trainingPipelines.html index 57650953c6f..d3f4b6c0810 100644 --- a/docs/dyn/aiplatform_v1beta1.projects.locations.trainingPipelines.html +++ b/docs/dyn/aiplatform_v1beta1.projects.locations.trainingPipelines.html @@ -227,7 +227,7 @@

diff --git a/docs/dyn/aiplatform_v1beta1.projects.locations.trainingPipelines.html b/docs/dyn/aiplatform_v1beta1.projects.locations.trainingPipelines.html
index 57650953c6f..d3f4b6c0810 100644
--- a/docs/dyn/aiplatform_v1beta1.projects.locations.trainingPipelines.html
+++ b/docs/dyn/aiplatform_v1beta1.projects.locations.trainingPipelines.html
@@ -227,7 +227,7 @@ Method Details

       },
     ],
     "healthProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes readiness probe.
-      "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take.
+      "exec": { # ExecAction specifies a command to execute. # ExecAction probes the health of a container by executing a command.
         "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy.
           "A String",
         ],
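Since the only change here is the comment on `exec`, a minimal sketch of how this probe is populated may help; the image URI and health-check command below are hypothetical, and per the field docs the command is exec'd directly (no shell), with exit status 0 meaning healthy:

    # Fragment of a Model upload body (illustrative, not from this patch).
    container_spec = {
        "imageUri": "us-docker.pkg.dev/my-project/my-repo/my-server:latest",  # hypothetical
        "healthProbe": {
            "exec": {
                # Exec'd directly, not via a shell; pipes and other shell syntax won't work.
                "command": ["/bin/grpc_health_probe", "-addr=:8080"],
            },
        },
    }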

@@ -245,7 +245,7 @@ Method Details

"predictRoute": "A String", # Immutable. HTTP path on the container to send prediction requests to. Vertex AI forwards requests sent using projects.locations.endpoints.predict to this path on the container's IP address and port. Vertex AI then returns the container's response in the API response. For example, if you set this field to `/foo`, then when Vertex AI receives a prediction request, it forwards the request body in a POST request to the `/foo` path on the port of your container specified by the first value of this `ModelContainerSpec`'s ports field. If you don't specify this field, it defaults to the following value when you deploy this Model to an Endpoint: /v1/endpoints/ENDPOINT/deployedModels/DEPLOYED_MODEL:predict The placeholders in this value are replaced as follows: * ENDPOINT: The last segment (following `endpoints/`)of the Endpoint.name][] field of the Endpoint where this Model has been deployed. (Vertex AI makes this value available to your container code as the [`AIP_ENDPOINT_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) * DEPLOYED_MODEL: DeployedModel.id of the `DeployedModel`. (Vertex AI makes this value available to your container code as the [`AIP_DEPLOYED_MODEL_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) "sharedMemorySizeMb": "A String", # Immutable. The amount of the VM memory to reserve as the shared memory for the model in megabytes. "startupProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes startup probe. - "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take. + "exec": { # ExecAction specifies a command to execute. # ExecAction probes the health of a container by executing a command. "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. "A String", ], @@ -532,7 +532,7 @@

@@ -532,7 +532,7 @@ Method Details

       },
     ],
     "healthProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes readiness probe.
-      "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take.
+      "exec": { # ExecAction specifies a command to execute. # ExecAction probes the health of a container by executing a command.
         "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy.
           "A String",
         ],

@@ -550,7 +550,7 @@ Method Details

"predictRoute": "A String", # Immutable. HTTP path on the container to send prediction requests to. Vertex AI forwards requests sent using projects.locations.endpoints.predict to this path on the container's IP address and port. Vertex AI then returns the container's response in the API response. For example, if you set this field to `/foo`, then when Vertex AI receives a prediction request, it forwards the request body in a POST request to the `/foo` path on the port of your container specified by the first value of this `ModelContainerSpec`'s ports field. If you don't specify this field, it defaults to the following value when you deploy this Model to an Endpoint: /v1/endpoints/ENDPOINT/deployedModels/DEPLOYED_MODEL:predict The placeholders in this value are replaced as follows: * ENDPOINT: The last segment (following `endpoints/`)of the Endpoint.name][] field of the Endpoint where this Model has been deployed. (Vertex AI makes this value available to your container code as the [`AIP_ENDPOINT_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) * DEPLOYED_MODEL: DeployedModel.id of the `DeployedModel`. (Vertex AI makes this value available to your container code as the [`AIP_DEPLOYED_MODEL_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) "sharedMemorySizeMb": "A String", # Immutable. The amount of the VM memory to reserve as the shared memory for the model in megabytes. "startupProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes startup probe. - "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take. + "exec": { # ExecAction specifies a command to execute. # ExecAction probes the health of a container by executing a command. "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. "A String", ], @@ -879,7 +879,7 @@

@@ -879,7 +879,7 @@ Method Details

       },
     ],
     "healthProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes readiness probe.
-      "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take.
+      "exec": { # ExecAction specifies a command to execute. # ExecAction probes the health of a container by executing a command.
         "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy.
           "A String",
         ],

@@ -897,7 +897,7 @@ Method Details

"predictRoute": "A String", # Immutable. HTTP path on the container to send prediction requests to. Vertex AI forwards requests sent using projects.locations.endpoints.predict to this path on the container's IP address and port. Vertex AI then returns the container's response in the API response. For example, if you set this field to `/foo`, then when Vertex AI receives a prediction request, it forwards the request body in a POST request to the `/foo` path on the port of your container specified by the first value of this `ModelContainerSpec`'s ports field. If you don't specify this field, it defaults to the following value when you deploy this Model to an Endpoint: /v1/endpoints/ENDPOINT/deployedModels/DEPLOYED_MODEL:predict The placeholders in this value are replaced as follows: * ENDPOINT: The last segment (following `endpoints/`)of the Endpoint.name][] field of the Endpoint where this Model has been deployed. (Vertex AI makes this value available to your container code as the [`AIP_ENDPOINT_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) * DEPLOYED_MODEL: DeployedModel.id of the `DeployedModel`. (Vertex AI makes this value available to your container code as the [`AIP_DEPLOYED_MODEL_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) "sharedMemorySizeMb": "A String", # Immutable. The amount of the VM memory to reserve as the shared memory for the model in megabytes. "startupProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes startup probe. - "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take. + "exec": { # ExecAction specifies a command to execute. # ExecAction probes the health of a container by executing a command. "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. "A String", ], @@ -1198,7 +1198,7 @@

@@ -1198,7 +1198,7 @@ Method Details

       },
     ],
     "healthProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes readiness probe.
-      "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take.
+      "exec": { # ExecAction specifies a command to execute. # ExecAction probes the health of a container by executing a command.
         "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy.
           "A String",
         ],

@@ -1216,7 +1216,7 @@ Method Details

"predictRoute": "A String", # Immutable. HTTP path on the container to send prediction requests to. Vertex AI forwards requests sent using projects.locations.endpoints.predict to this path on the container's IP address and port. Vertex AI then returns the container's response in the API response. For example, if you set this field to `/foo`, then when Vertex AI receives a prediction request, it forwards the request body in a POST request to the `/foo` path on the port of your container specified by the first value of this `ModelContainerSpec`'s ports field. If you don't specify this field, it defaults to the following value when you deploy this Model to an Endpoint: /v1/endpoints/ENDPOINT/deployedModels/DEPLOYED_MODEL:predict The placeholders in this value are replaced as follows: * ENDPOINT: The last segment (following `endpoints/`)of the Endpoint.name][] field of the Endpoint where this Model has been deployed. (Vertex AI makes this value available to your container code as the [`AIP_ENDPOINT_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) * DEPLOYED_MODEL: DeployedModel.id of the `DeployedModel`. (Vertex AI makes this value available to your container code as the [`AIP_DEPLOYED_MODEL_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) "sharedMemorySizeMb": "A String", # Immutable. The amount of the VM memory to reserve as the shared memory for the model in megabytes. "startupProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes startup probe. - "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take. + "exec": { # ExecAction specifies a command to execute. # ExecAction probes the health of a container by executing a command. "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. "A String", ], diff --git a/docs/dyn/aiplatform_v1beta1.projects.locations.tuningJobs.html b/docs/dyn/aiplatform_v1beta1.projects.locations.tuningJobs.html index 6dea72a0c3c..132e8d6bbaf 100644 --- a/docs/dyn/aiplatform_v1beta1.projects.locations.tuningJobs.html +++ b/docs/dyn/aiplatform_v1beta1.projects.locations.tuningJobs.html @@ -135,6 +135,19 @@

diff --git a/docs/dyn/aiplatform_v1beta1.projects.locations.tuningJobs.html b/docs/dyn/aiplatform_v1beta1.projects.locations.tuningJobs.html
index 6dea72a0c3c..132e8d6bbaf 100644
--- a/docs/dyn/aiplatform_v1beta1.projects.locations.tuningJobs.html
+++ b/docs/dyn/aiplatform_v1beta1.projects.locations.tuningJobs.html
@@ -135,6 +135,19 @@ Method Details

"baseModel": "A String", # The base model that is being tuned, e.g., "gemini-1.0-pro-002". "createTime": "A String", # Output only. Time when the TuningJob was created. "description": "A String", # Optional. The description of the TuningJob. + "distillationSpec": { # Tuning Spec for Distillation. # Tuning Spec for Distillation. + "baseTeacherModel": "A String", # The base teacher model that is being distilled, e.g., "gemini-1.0-pro-002". + "hyperParameters": { # Hyperparameters for Distillation. # Optional. Hyperparameters for Distillation. + "adapterSize": "A String", # Optional. Adapter size for distillation. + "epochCount": "A String", # Optional. Number of complete passes the model makes over the entire training dataset during training. + "learningRateMultiplier": 3.14, # Optional. Multiplier for adjusting the default learning rate. + }, + "pipelineRootDirectory": "A String", # Required. A path in a Cloud Storage bucket, which will be treated as the root output directory of the distillation pipeline. It is used by the system to generate the paths of output artifacts. + "studentModel": "A String", # The student model that is being tuned, e.g., "google/gemma-2b-it". + "trainingDatasetUri": "A String", # Required. Cloud Storage path to file containing training dataset for tuning. The dataset must be formatted as a JSONL file. + "tunedTeacherModelSource": "A String", # The resource name of the Tuned teacher model. Format: `projects/{project}/locations/{location}/models/{model}`. + "validationDatasetUri": "A String", # Optional. Cloud Storage path to file containing validation dataset for tuning. The dataset must be formatted as a JSONL file. + }, "encryptionSpec": { # Represents a customer-managed encryption key spec that can be applied to a top-level resource. # Customer-managed encryption key options for a TuningJob. If this is set, then all resources created by the TuningJob will be encrypted with the provided encryption key. "kmsKeyName": "A String", # Required. The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: `projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key`. The key needs to be in the same region as where the compute resource is created. }, @@ -153,6 +166,23 @@

@@ -153,6 +166,23 @@ Method Details

"a_key": "A String", }, "name": "A String", # Output only. Identifier. Resource name of a TuningJob. Format: `projects/{project}/locations/{location}/tuningJobs/{tuning_job}` + "pipelineJob": "A String", # Output only. The resource name of the PipelineJob associated with the TuningJob. Format: `projects/{project}/locations/{location}/pipelineJobs/{pipeline_job}`. + "reinforcementLearningSpec": { # Tuning Spec for Reinforcement Learning. # Tuning Spec for Reinforcement Learning. + "hyperParameters": { # Hyperparameters for Reinforcement Learning. # Optional. Additional hyper-parameters to use during tuning. + "epochCount": "A String", # Optional. Number of training epoches for the tuning job. + "humanFeedbackConfig": { # Configures Reinforcement Learning to use human feedback during tuning. # Configures Reinforcement Learning to use human feedback for preference data during tuning. + "preferenceDatasetUri": "A String", # Required. Cloud Storage path to human preference data. + }, + "klCoefficient": 3.14, # Optional. KL divergence coefficient for Reinforcement Learning. + "learningRateMultiplier": 3.14, # Optional. Learning rate multiplier for Reinforcement Learning. + "rewardModelTrainingConfig": { # Configures Reinforcement Learning to learn preference by training a reward model. # Configures Reinforcement Learning to train a reward model to learn preference. + "epochCount": "A String", # Optional. Number of training epoches for the reward model training job. + "learningRateMultiplier": 3.14, # Optional. Learning rate multiplier for reward model training. + }, + }, + "promptDatasetUri": "A String", # Required. Cloud Storage path to the prompt dataset to use during Reinforcement Learning. + "validationDatasetUri": "A String", # Optional. Cloud Storage path to the validation dataset to use during Reinforcement Learning. + }, "startTime": "A String", # Output only. Time when the TuningJob for the first time entered the `JOB_STATE_RUNNING` state. "state": "A String", # Output only. The detailed state of the job. "supervisedTuningSpec": { # Tuning Spec for Supervised Tuning. # Tuning Spec for Supervised Fine Tuning. @@ -170,6 +200,274 @@

@@ -170,6 +200,274 @@ Method Details

   },
   "tunedModelDisplayName": "A String", # Optional. The display name of the TunedModel. The name can be up to 128 characters long and can consist of any UTF-8 characters.
   "tuningDataStats": { # The tuning data statistic values for TuningJob. # Output only. The tuning data statistics associated with this TuningJob.
+    "distillationDataStats": { # Statistics computed for datasets used for distillation. # Statistics for distillation.
+      "trainingDatasetStats": { # Statistics computed over a tuning dataset. # Output only. Statistics computed for the training dataset.
+        "totalBillableCharacterCount": "A String", # Output only. Number of billable characters in the tuning dataset.
+        "totalTuningCharacterCount": "A String", # Output only. Number of tuning characters in the tuning dataset.
+        "tuningDatasetExampleCount": "A String", # Output only. Number of examples in the tuning dataset.
+        "tuningStepCount": "A String", # Output only. Number of tuning steps for this Tuning Job.
+        "userDatasetExamples": [ # Output only. Sample user messages in the training dataset uri.
+          { # The base structured datatype containing multi-part content of a message. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn.
+            "parts": [ # Required. Ordered `Parts` that constitute a single message. Parts may have different IANA MIME types.
+              { # A datatype containing media that is part of a multi-part `Content` message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. A `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
+                "fileData": { # URI based data. # Optional. URI based data.
+                  "fileUri": "A String", # Required. URI.
+                  "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
+                },
+                "functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] with the parameters and their values.
+                  "args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
+                    "a_key": "", # Properties of the object.
+                  },
+                  "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name].
+                },
+                "functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result output of a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function call. It is used as context to the model.
+                  "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
+                  "response": { # Required. The function response in JSON object format.
+                    "a_key": "", # Properties of the object.
+                  },
+                },
+                "inlineData": { # Content blob. It's preferred to send as text directly rather than raw bytes. # Optional. Inlined bytes data.
+                  "data": "A String", # Required. Raw bytes.
+                  "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
+                },
+                "text": "A String", # Optional. Text part (can be code).
+                "videoMetadata": { # Metadata describes the input video content. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
+                  "endOffset": "A String", # Optional. The end offset of the video.
+                  "startOffset": "A String", # Optional. The start offset of the video.
+                },
+              },
+            ],
+            "role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. Useful to set for multi-turn conversations, otherwise can be left blank or unset.
+          },
+        ],
+        "userInputTokenDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the user input tokens.
+          "buckets": [ # Output only. Defines the histogram bucket.
+            { # Dataset bucket used to create a histogram for the distribution given a population of values.
+              "count": "A String", # Output only. Number of values in the bucket.
+              "left": 3.14, # Output only. Left bound of the bucket.
+              "right": 3.14, # Output only. Right bound of the bucket.
+            },
+          ],
+          "max": 3.14, # Output only. The maximum of the population values.
+          "mean": 3.14, # Output only. The arithmetic mean of the values in the population.
+          "median": 3.14, # Output only. The median of the values in the population.
+          "min": 3.14, # Output only. The minimum of the population values.
+          "p5": 3.14, # Output only. The 5th percentile of the values in the population.
+          "p95": 3.14, # Output only. The 95th percentile of the values in the population.
+          "sum": 3.14, # Output only. Sum of a given population of values.
+        },
+        "userMessagePerExampleDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the messages per example.
+          "buckets": [ # Output only. Defines the histogram bucket.
+            { # Dataset bucket used to create a histogram for the distribution given a population of values.
+              "count": "A String", # Output only. Number of values in the bucket.
+              "left": 3.14, # Output only. Left bound of the bucket.
+              "right": 3.14, # Output only. Right bound of the bucket.
+            },
+          ],
+          "max": 3.14, # Output only. The maximum of the population values.
+          "mean": 3.14, # Output only. The arithmetic mean of the values in the population.
+          "median": 3.14, # Output only. The median of the values in the population.
+          "min": 3.14, # Output only. The minimum of the population values.
+          "p5": 3.14, # Output only. The 5th percentile of the values in the population.
+          "p95": 3.14, # Output only. The 95th percentile of the values in the population.
+          "sum": 3.14, # Output only. Sum of a given population of values.
+        },
+        "userOutputTokenDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the user output tokens.
+          "buckets": [ # Output only. Defines the histogram bucket.
+            { # Dataset bucket used to create a histogram for the distribution given a population of values.
+              "count": "A String", # Output only. Number of values in the bucket.
+              "left": 3.14, # Output only. Left bound of the bucket.
+              "right": 3.14, # Output only. Right bound of the bucket.
+            },
+          ],
+          "max": 3.14, # Output only. The maximum of the population values.
+          "mean": 3.14, # Output only. The arithmetic mean of the values in the population.
+          "median": 3.14, # Output only. The median of the values in the population.
+          "min": 3.14, # Output only. The minimum of the population values.
+          "p5": 3.14, # Output only. The 5th percentile of the values in the population.
+          "p95": 3.14, # Output only. The 95th percentile of the values in the population.
+          "sum": 3.14, # Output only. Sum of a given population of values.
+        },
+      },
+    },
+    "reinforcementLearningDataStats": { # Statistics computed for datasets used for reinforcement learning. # Statistics for reinforcement learning.
+      "preferenceDatasetStats": { # Statistics computed over a tuning dataset. # Output only. Statistics computed for the preference dataset. This can be either a human preference dataset or a preference dataset generated from AI feedback.
+        "totalBillableCharacterCount": "A String", # Output only. Number of billable characters in the tuning dataset.
+        "totalTuningCharacterCount": "A String", # Output only. Number of tuning characters in the tuning dataset.
+        "tuningDatasetExampleCount": "A String", # Output only. Number of examples in the tuning dataset.
+        "tuningStepCount": "A String", # Output only. Number of tuning steps for this Tuning Job.
+        "userDatasetExamples": [ # Output only. Sample user messages in the training dataset uri.
+          { # The base structured datatype containing multi-part content of a message. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn.
+            "parts": [ # Required. Ordered `Parts` that constitute a single message. Parts may have different IANA MIME types.
+              { # A datatype containing media that is part of a multi-part `Content` message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. A `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
+                "fileData": { # URI based data. # Optional. URI based data.
+                  "fileUri": "A String", # Required. URI.
+                  "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
+                },
+                "functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] with the parameters and their values.
+                  "args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
+                    "a_key": "", # Properties of the object.
+                  },
+                  "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name].
+                },
+                "functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result output of a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function call. It is used as context to the model.
+                  "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
+                  "response": { # Required. The function response in JSON object format.
+                    "a_key": "", # Properties of the object.
+                  },
+                },
+                "inlineData": { # Content blob. It's preferred to send as text directly rather than raw bytes. # Optional. Inlined bytes data.
+                  "data": "A String", # Required. Raw bytes.
+                  "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
+                },
+                "text": "A String", # Optional. Text part (can be code).
+                "videoMetadata": { # Metadata describes the input video content. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
+                  "endOffset": "A String", # Optional. The end offset of the video.
+                  "startOffset": "A String", # Optional. The start offset of the video.
+                },
+              },
+            ],
+            "role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. Useful to set for multi-turn conversations, otherwise can be left blank or unset.
+          },
+        ],
+        "userInputTokenDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the user input tokens.
+          "buckets": [ # Output only. Defines the histogram bucket.
+            { # Dataset bucket used to create a histogram for the distribution given a population of values.
+              "count": "A String", # Output only. Number of values in the bucket.
+              "left": 3.14, # Output only. Left bound of the bucket.
+              "right": 3.14, # Output only. Right bound of the bucket.
+            },
+          ],
+          "max": 3.14, # Output only. The maximum of the population values.
+          "mean": 3.14, # Output only. The arithmetic mean of the values in the population.
+          "median": 3.14, # Output only. The median of the values in the population.
+          "min": 3.14, # Output only. The minimum of the population values.
+          "p5": 3.14, # Output only. The 5th percentile of the values in the population.
+          "p95": 3.14, # Output only. The 95th percentile of the values in the population.
+          "sum": 3.14, # Output only. Sum of a given population of values.
+        },
+        "userMessagePerExampleDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the messages per example.
+          "buckets": [ # Output only. Defines the histogram bucket.
+            { # Dataset bucket used to create a histogram for the distribution given a population of values.
+              "count": "A String", # Output only. Number of values in the bucket.
+              "left": 3.14, # Output only. Left bound of the bucket.
+              "right": 3.14, # Output only. Right bound of the bucket.
+            },
+          ],
+          "max": 3.14, # Output only. The maximum of the population values.
+          "mean": 3.14, # Output only. The arithmetic mean of the values in the population.
+          "median": 3.14, # Output only. The median of the values in the population.
+          "min": 3.14, # Output only. The minimum of the population values.
+          "p5": 3.14, # Output only. The 5th percentile of the values in the population.
+          "p95": 3.14, # Output only. The 95th percentile of the values in the population.
+          "sum": 3.14, # Output only. Sum of a given population of values.
+        },
+        "userOutputTokenDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the user output tokens.
+          "buckets": [ # Output only. Defines the histogram bucket.
+            { # Dataset bucket used to create a histogram for the distribution given a population of values.
+              "count": "A String", # Output only. Number of values in the bucket.
+              "left": 3.14, # Output only. Left bound of the bucket.
+              "right": 3.14, # Output only. Right bound of the bucket.
+            },
+          ],
+          "max": 3.14, # Output only. The maximum of the population values.
+          "mean": 3.14, # Output only. The arithmetic mean of the values in the population.
+          "median": 3.14, # Output only. The median of the values in the population.
+          "min": 3.14, # Output only. The minimum of the population values.
+          "p5": 3.14, # Output only. The 5th percentile of the values in the population.
+          "p95": 3.14, # Output only. The 95th percentile of the values in the population.
+          "sum": 3.14, # Output only. Sum of a given population of values.
+        },
+      },
+      "promptDatasetStats": { # Statistics computed over a tuning dataset. # Output only. Statistics computed for the prompt dataset used during reinforcement learning.
+        "totalBillableCharacterCount": "A String", # Output only. Number of billable characters in the tuning dataset.
+        "totalTuningCharacterCount": "A String", # Output only. Number of tuning characters in the tuning dataset.
+        "tuningDatasetExampleCount": "A String", # Output only. Number of examples in the tuning dataset.
+        "tuningStepCount": "A String", # Output only. Number of tuning steps for this Tuning Job.
+        "userDatasetExamples": [ # Output only. Sample user messages in the training dataset uri.
+          { # The base structured datatype containing multi-part content of a message. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn.
+            "parts": [ # Required. Ordered `Parts` that constitute a single message. Parts may have different IANA MIME types.
+              { # A datatype containing media that is part of a multi-part `Content` message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. A `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
+                "fileData": { # URI based data. # Optional. URI based data.
+                  "fileUri": "A String", # Required. URI.
+                  "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
+                },
+                "functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] with the parameters and their values.
+                  "args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
+                    "a_key": "", # Properties of the object.
+                  },
+                  "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name].
+                },
+                "functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result output of a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function call. It is used as context to the model.
+                  "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
+                  "response": { # Required. The function response in JSON object format.
+                    "a_key": "", # Properties of the object.
+                  },
+                },
+                "inlineData": { # Content blob. It's preferred to send as text directly rather than raw bytes. # Optional. Inlined bytes data.
+                  "data": "A String", # Required. Raw bytes.
+                  "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
+                },
+                "text": "A String", # Optional. Text part (can be code).
+                "videoMetadata": { # Metadata describes the input video content. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
+                  "endOffset": "A String", # Optional. The end offset of the video.
+                  "startOffset": "A String", # Optional. The start offset of the video.
+                },
+              },
+            ],
+            "role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. Useful to set for multi-turn conversations, otherwise can be left blank or unset.
+          },
+        ],
+        "userInputTokenDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the user input tokens.
+          "buckets": [ # Output only. Defines the histogram bucket.
+            { # Dataset bucket used to create a histogram for the distribution given a population of values.
+              "count": "A String", # Output only. Number of values in the bucket.
+              "left": 3.14, # Output only. Left bound of the bucket.
+              "right": 3.14, # Output only. Right bound of the bucket.
+            },
+          ],
+          "max": 3.14, # Output only. The maximum of the population values.
+          "mean": 3.14, # Output only. The arithmetic mean of the values in the population.
+          "median": 3.14, # Output only. The median of the values in the population.
+          "min": 3.14, # Output only. The minimum of the population values.
+          "p5": 3.14, # Output only. The 5th percentile of the values in the population.
+          "p95": 3.14, # Output only. The 95th percentile of the values in the population.
+          "sum": 3.14, # Output only. Sum of a given population of values.
+        },
+        "userMessagePerExampleDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the messages per example.
+          "buckets": [ # Output only. Defines the histogram bucket.
+            { # Dataset bucket used to create a histogram for the distribution given a population of values.
+              "count": "A String", # Output only. Number of values in the bucket.
+              "left": 3.14, # Output only. Left bound of the bucket.
+              "right": 3.14, # Output only. Right bound of the bucket.
+            },
+          ],
+          "max": 3.14, # Output only. The maximum of the population values.
+          "mean": 3.14, # Output only. The arithmetic mean of the values in the population.
+          "median": 3.14, # Output only. The median of the values in the population.
+          "min": 3.14, # Output only. The minimum of the population values.
+          "p5": 3.14, # Output only. The 5th percentile of the values in the population.
+          "p95": 3.14, # Output only. The 95th percentile of the values in the population.
+          "sum": 3.14, # Output only. Sum of a given population of values.
+        },
+        "userOutputTokenDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the user output tokens.
+          "buckets": [ # Output only. Defines the histogram bucket.
+            { # Dataset bucket used to create a histogram for the distribution given a population of values.
+              "count": "A String", # Output only. Number of values in the bucket.
+              "left": 3.14, # Output only. Left bound of the bucket.
+              "right": 3.14, # Output only. Right bound of the bucket.
+            },
+          ],
+          "max": 3.14, # Output only. The maximum of the population values.
+          "mean": 3.14, # Output only. The arithmetic mean of the values in the population.
+          "median": 3.14, # Output only. The median of the values in the population.
+          "min": 3.14, # Output only. The minimum of the population values.
+          "p5": 3.14, # Output only. The 5th percentile of the values in the population.
+          "p95": 3.14, # Output only. The 95th percentile of the values in the population.
+          "sum": 3.14, # Output only. Sum of a given population of values.
+        },
+      },
+    },
     "supervisedTuningDataStats": { # Tuning data statistics for Supervised Tuning. # The SFT Tuning data stats.
       "totalBillableCharacterCount": "A String", # Output only. Number of billable characters in the tuning dataset.
       "totalTuningCharacterCount": "A String", # Output only. Number of tuning characters in the tuning dataset.
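Everything in this hunk is output-only; once a job is fetched, the distributions above are plain dicts. An illustrative way to read them back (the job name is hypothetical, and bucket counts arrive as strings per the schema above):

    from googleapiclient.discovery import build

    aiplatform = build("aiplatform", "v1beta1")
    job = (
        aiplatform.projects().locations().tuningJobs()
        .get(name="projects/my-project/locations/us-central1/tuningJobs/123")  # hypothetical
        .execute()
    )
    dist = (
        job.get("tuningDataStats", {})
        .get("distillationDataStats", {})
        .get("trainingDatasetStats", {})
        .get("userInputTokenDistribution", {})
    )
    # Estimate the share of examples falling in each input-token bucket.
    total = sum(int(b["count"]) for b in dist.get("buckets", []))
    for b in dist.get("buckets", []):
        share = int(b["count"]) / total if total else 0.0
        print(f"[{b['left']}, {b['right']}): {b['count']} examples ({share:.1%})")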

@@ -274,6 +572,19 @@ Method Details

"baseModel": "A String", # The base model that is being tuned, e.g., "gemini-1.0-pro-002". "createTime": "A String", # Output only. Time when the TuningJob was created. "description": "A String", # Optional. The description of the TuningJob. + "distillationSpec": { # Tuning Spec for Distillation. # Tuning Spec for Distillation. + "baseTeacherModel": "A String", # The base teacher model that is being distilled, e.g., "gemini-1.0-pro-002". + "hyperParameters": { # Hyperparameters for Distillation. # Optional. Hyperparameters for Distillation. + "adapterSize": "A String", # Optional. Adapter size for distillation. + "epochCount": "A String", # Optional. Number of complete passes the model makes over the entire training dataset during training. + "learningRateMultiplier": 3.14, # Optional. Multiplier for adjusting the default learning rate. + }, + "pipelineRootDirectory": "A String", # Required. A path in a Cloud Storage bucket, which will be treated as the root output directory of the distillation pipeline. It is used by the system to generate the paths of output artifacts. + "studentModel": "A String", # The student model that is being tuned, e.g., "google/gemma-2b-it". + "trainingDatasetUri": "A String", # Required. Cloud Storage path to file containing training dataset for tuning. The dataset must be formatted as a JSONL file. + "tunedTeacherModelSource": "A String", # The resource name of the Tuned teacher model. Format: `projects/{project}/locations/{location}/models/{model}`. + "validationDatasetUri": "A String", # Optional. Cloud Storage path to file containing validation dataset for tuning. The dataset must be formatted as a JSONL file. + }, "encryptionSpec": { # Represents a customer-managed encryption key spec that can be applied to a top-level resource. # Customer-managed encryption key options for a TuningJob. If this is set, then all resources created by the TuningJob will be encrypted with the provided encryption key. "kmsKeyName": "A String", # Required. The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: `projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key`. The key needs to be in the same region as where the compute resource is created. }, @@ -292,6 +603,23 @@

@@ -292,6 +603,23 @@ Method Details

"a_key": "A String", }, "name": "A String", # Output only. Identifier. Resource name of a TuningJob. Format: `projects/{project}/locations/{location}/tuningJobs/{tuning_job}` + "pipelineJob": "A String", # Output only. The resource name of the PipelineJob associated with the TuningJob. Format: `projects/{project}/locations/{location}/pipelineJobs/{pipeline_job}`. + "reinforcementLearningSpec": { # Tuning Spec for Reinforcement Learning. # Tuning Spec for Reinforcement Learning. + "hyperParameters": { # Hyperparameters for Reinforcement Learning. # Optional. Additional hyper-parameters to use during tuning. + "epochCount": "A String", # Optional. Number of training epoches for the tuning job. + "humanFeedbackConfig": { # Configures Reinforcement Learning to use human feedback during tuning. # Configures Reinforcement Learning to use human feedback for preference data during tuning. + "preferenceDatasetUri": "A String", # Required. Cloud Storage path to human preference data. + }, + "klCoefficient": 3.14, # Optional. KL divergence coefficient for Reinforcement Learning. + "learningRateMultiplier": 3.14, # Optional. Learning rate multiplier for Reinforcement Learning. + "rewardModelTrainingConfig": { # Configures Reinforcement Learning to learn preference by training a reward model. # Configures Reinforcement Learning to train a reward model to learn preference. + "epochCount": "A String", # Optional. Number of training epoches for the reward model training job. + "learningRateMultiplier": 3.14, # Optional. Learning rate multiplier for reward model training. + }, + }, + "promptDatasetUri": "A String", # Required. Cloud Storage path to the prompt dataset to use during Reinforcement Learning. + "validationDatasetUri": "A String", # Optional. Cloud Storage path to the validation dataset to use during Reinforcement Learning. + }, "startTime": "A String", # Output only. Time when the TuningJob for the first time entered the `JOB_STATE_RUNNING` state. "state": "A String", # Output only. The detailed state of the job. "supervisedTuningSpec": { # Tuning Spec for Supervised Tuning. # Tuning Spec for Supervised Fine Tuning. @@ -309,6 +637,274 @@

@@ -309,6 +637,274 @@ Method Details

   },
   "tunedModelDisplayName": "A String", # Optional. The display name of the TunedModel. The name can be up to 128 characters long and can consist of any UTF-8 characters.
   "tuningDataStats": { # The tuning data statistic values for TuningJob. # Output only. The tuning data statistics associated with this TuningJob.
+    "distillationDataStats": { # Statistics computed for datasets used for distillation. # Statistics for distillation.
+      "trainingDatasetStats": { # Statistics computed over a tuning dataset. # Output only. Statistics computed for the training dataset.
+        "totalBillableCharacterCount": "A String", # Output only. Number of billable characters in the tuning dataset.
+        "totalTuningCharacterCount": "A String", # Output only. Number of tuning characters in the tuning dataset.
+        "tuningDatasetExampleCount": "A String", # Output only. Number of examples in the tuning dataset.
+        "tuningStepCount": "A String", # Output only. Number of tuning steps for this Tuning Job.
+        "userDatasetExamples": [ # Output only. Sample user messages in the training dataset uri.
+          { # The base structured datatype containing multi-part content of a message. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn.
+            "parts": [ # Required. Ordered `Parts` that constitute a single message. Parts may have different IANA MIME types.
+              { # A datatype containing media that is part of a multi-part `Content` message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. A `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
+                "fileData": { # URI based data. # Optional. URI based data.
+                  "fileUri": "A String", # Required. URI.
+                  "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
+                },
+                "functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] with the parameters and their values.
+                  "args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
+                    "a_key": "", # Properties of the object.
+                  },
+                  "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name].
+                },
+                "functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result output of a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function call. It is used as context to the model.
+                  "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
+                  "response": { # Required. The function response in JSON object format.
+                    "a_key": "", # Properties of the object.
+                  },
+                },
+                "inlineData": { # Content blob. It's preferred to send as text directly rather than raw bytes. # Optional. Inlined bytes data.
+                  "data": "A String", # Required. Raw bytes.
+                  "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
+                },
+                "text": "A String", # Optional. Text part (can be code).
+                "videoMetadata": { # Metadata describes the input video content. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
+                  "endOffset": "A String", # Optional. The end offset of the video.
+                  "startOffset": "A String", # Optional. The start offset of the video.
+                },
+              },
+            ],
+            "role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. Useful to set for multi-turn conversations, otherwise can be left blank or unset.
+          },
+        ],
+        "userInputTokenDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the user input tokens.
+          "buckets": [ # Output only. Defines the histogram bucket.
+            { # Dataset bucket used to create a histogram for the distribution given a population of values.
+              "count": "A String", # Output only. Number of values in the bucket.
+              "left": 3.14, # Output only. Left bound of the bucket.
+              "right": 3.14, # Output only. Right bound of the bucket.
+            },
+          ],
+          "max": 3.14, # Output only. The maximum of the population values.
+          "mean": 3.14, # Output only. The arithmetic mean of the values in the population.
+          "median": 3.14, # Output only. The median of the values in the population.
+          "min": 3.14, # Output only. The minimum of the population values.
+          "p5": 3.14, # Output only. The 5th percentile of the values in the population.
+          "p95": 3.14, # Output only. The 95th percentile of the values in the population.
+          "sum": 3.14, # Output only. Sum of a given population of values.
+        },
+        "userMessagePerExampleDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the messages per example.
+          "buckets": [ # Output only. Defines the histogram bucket.
+            { # Dataset bucket used to create a histogram for the distribution given a population of values.
+              "count": "A String", # Output only. Number of values in the bucket.
+              "left": 3.14, # Output only. Left bound of the bucket.
+              "right": 3.14, # Output only. Right bound of the bucket.
+            },
+          ],
+          "max": 3.14, # Output only. The maximum of the population values.
+          "mean": 3.14, # Output only. The arithmetic mean of the values in the population.
+          "median": 3.14, # Output only. The median of the values in the population.
+          "min": 3.14, # Output only. The minimum of the population values.
+          "p5": 3.14, # Output only. The 5th percentile of the values in the population.
+          "p95": 3.14, # Output only. The 95th percentile of the values in the population.
+          "sum": 3.14, # Output only. Sum of a given population of values.
+        },
+        "userOutputTokenDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the user output tokens.
+          "buckets": [ # Output only. Defines the histogram bucket.
+            { # Dataset bucket used to create a histogram for the distribution given a population of values.
+              "count": "A String", # Output only. Number of values in the bucket.
+              "left": 3.14, # Output only. Left bound of the bucket.
+              "right": 3.14, # Output only. Right bound of the bucket.
+            },
+          ],
+          "max": 3.14, # Output only. The maximum of the population values.
+          "mean": 3.14, # Output only. The arithmetic mean of the values in the population.
+          "median": 3.14, # Output only. The median of the values in the population.
+          "min": 3.14, # Output only. The minimum of the population values.
+          "p5": 3.14, # Output only. The 5th percentile of the values in the population.
+          "p95": 3.14, # Output only. The 95th percentile of the values in the population.
+          "sum": 3.14, # Output only. Sum of a given population of values.
+        },
+      },
+    },
+    "reinforcementLearningDataStats": { # Statistics computed for datasets used for reinforcement learning. # Statistics for reinforcement learning.
+      "preferenceDatasetStats": { # Statistics computed over a tuning dataset. # Output only. Statistics computed for the preference dataset. This can be either a human preference dataset or a preference dataset generated from AI feedback.
+        "totalBillableCharacterCount": "A String", # Output only. Number of billable characters in the tuning dataset.
+        "totalTuningCharacterCount": "A String", # Output only. Number of tuning characters in the tuning dataset.
+        "tuningDatasetExampleCount": "A String", # Output only. Number of examples in the tuning dataset.
+        "tuningStepCount": "A String", # Output only. Number of tuning steps for this Tuning Job.
+        "userDatasetExamples": [ # Output only. Sample user messages in the training dataset uri.
+          { # The base structured datatype containing multi-part content of a message. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn.
+            "parts": [ # Required. Ordered `Parts` that constitute a single message. Parts may have different IANA MIME types.
+              { # A datatype containing media that is part of a multi-part `Content` message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. A `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
+                "fileData": { # URI based data. # Optional. URI based data.
+                  "fileUri": "A String", # Required. URI.
+                  "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
+                },
+                "functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] with the parameters and their values.
+                  "args": { # Optional. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
+                    "a_key": "", # Properties of the object.
+                  },
+                  "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name].
+                },
+                "functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result output of a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function call. It is used as context to the model.
+                  "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
+                  "response": { # Required. The function response in JSON object format.
+                    "a_key": "", # Properties of the object.
+                  },
+                },
+                "inlineData": { # Content blob. It's preferred to send as text directly rather than raw bytes. # Optional. Inlined bytes data.
+                  "data": "A String", # Required. Raw bytes.
+                  "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
+                },
+                "text": "A String", # Optional. Text part (can be code).
+                "videoMetadata": { # Metadata describes the input video content. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
+                  "endOffset": "A String", # Optional. The end offset of the video.
+                  "startOffset": "A String", # Optional. The start offset of the video.
+                },
+              },
+            ],
+            "role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. Useful to set for multi-turn conversations, otherwise can be left blank or unset.
+          },
+        ],
+        "userInputTokenDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the user input tokens.
+          "buckets": [ # Output only. Defines the histogram bucket.
+            { # Dataset bucket used to create a histogram for the distribution given a population of values.
+              "count": "A String", # Output only. Number of values in the bucket.
+              "left": 3.14, # Output only. Left bound of the bucket.
+              "right": 3.14, # Output only. Right bound of the bucket.
+            },
+          ],
+          "max": 3.14, # Output only. The maximum of the population values.
+          "mean": 3.14, # Output only. The arithmetic mean of the values in the population.
+          "median": 3.14, # Output only. The median of the values in the population.
+          "min": 3.14, # Output only. The minimum of the population values.
+          "p5": 3.14, # Output only. The 5th percentile of the values in the population.
+          "p95": 3.14, # Output only. The 95th percentile of the values in the population.
+          "sum": 3.14, # Output only. Sum of a given population of values.
+        },
+        "userMessagePerExampleDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the messages per example.
+          "buckets": [ # Output only. Defines the histogram bucket.
+            { # Dataset bucket used to create a histogram for the distribution given a population of values.
+              "count": "A String", # Output only. Number of values in the bucket.
+              "left": 3.14, # Output only. Left bound of the bucket.
+              "right": 3.14, # Output only. Right bound of the bucket.
+            },
+          ],
+          "max": 3.14, # Output only. The maximum of the population values.
+          "mean": 3.14, # Output only. The arithmetic mean of the values in the population.
+          "median": 3.14, # Output only. The median of the values in the population.
+          "min": 3.14, # Output only. The minimum of the population values.
+          "p5": 3.14, # Output only. The 5th percentile of the values in the population.
+          "p95": 3.14, # Output only. The 95th percentile of the values in the population.
+          "sum": 3.14, # Output only. Sum of a given population of values.
+        },
+        "userOutputTokenDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the user output tokens.
+          "buckets": [ # Output only. Defines the histogram bucket.
+            { # Dataset bucket used to create a histogram for the distribution given a population of values.
+              "count": "A String", # Output only. Number of values in the bucket.
+ "left": 3.14, # Output only. Left bound of the bucket. + "right": 3.14, # Output only. Right bound of the bucket. + }, + ], + "max": 3.14, # Output only. The maximum of the population values. + "mean": 3.14, # Output only. The arithmetic mean of the values in the population. + "median": 3.14, # Output only. The median of the values in the population. + "min": 3.14, # Output only. The minimum of the population values. + "p5": 3.14, # Output only. The 5th percentile of the values in the population. + "p95": 3.14, # Output only. The 95th percentile of the values in the population. + "sum": 3.14, # Output only. Sum of a given population of values. + }, + }, + "promptDatasetStats": { # Statistics computed over a tuning dataset. # Output only. Statistics computed for the prompt dataset used during reinforcement learning. + "totalBillableCharacterCount": "A String", # Output only. Number of billable characters in the tuning dataset. + "totalTuningCharacterCount": "A String", # Output only. Number of tuning characters in the tuning dataset. + "tuningDatasetExampleCount": "A String", # Output only. Number of examples in the tuning dataset. + "tuningStepCount": "A String", # Output only. Number of tuning steps for this Tuning Job. + "userDatasetExamples": [ # Output only. Sample user messages in the training dataset uri. + { # The base structured datatype containing multi-part content of a message. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn. + "parts": [ # Required. Ordered `Parts` that constitute a single message. Parts may have different IANA MIME types. + { # A datatype containing media that is part of a multi-part `Content` message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. A `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes. + "fileData": { # URI based data. # Optional. URI based data. + "fileUri": "A String", # Required. URI. + "mimeType": "A String", # Required. The IANA standard MIME type of the source data. + }, + "functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] with the parameters and their values. + "args": { # Optional. Required. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details. + "a_key": "", # Properties of the object. + }, + "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name]. + }, + "functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result output of a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function call. It is used as context to the model. + "name": "A String", # Required. 
The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name]. + "response": { # Required. The function response in JSON object format. + "a_key": "", # Properties of the object. + }, + }, + "inlineData": { # Content blob. It's preferred to send as text directly rather than raw bytes. # Optional. Inlined bytes data. + "data": "A String", # Required. Raw bytes. + "mimeType": "A String", # Required. The IANA standard MIME type of the source data. + }, + "text": "A String", # Optional. Text part (can be code). + "videoMetadata": { # Metadata describes the input video content. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data. + "endOffset": "A String", # Optional. The end offset of the video. + "startOffset": "A String", # Optional. The start offset of the video. + }, + }, + ], + "role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. Useful to set for multi-turn conversations, otherwise can be left blank or unset. + }, + ], + "userInputTokenDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the user input tokens. + "buckets": [ # Output only. Defines the histogram bucket. + { # Dataset bucket used to create a histogram for the distribution given a population of values. + "count": "A String", # Output only. Number of values in the bucket. + "left": 3.14, # Output only. Left bound of the bucket. + "right": 3.14, # Output only. Right bound of the bucket. + }, + ], + "max": 3.14, # Output only. The maximum of the population values. + "mean": 3.14, # Output only. The arithmetic mean of the values in the population. + "median": 3.14, # Output only. The median of the values in the population. + "min": 3.14, # Output only. The minimum of the population values. + "p5": 3.14, # Output only. The 5th percentile of the values in the population. + "p95": 3.14, # Output only. The 95th percentile of the values in the population. + "sum": 3.14, # Output only. Sum of a given population of values. + }, + "userMessagePerExampleDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the messages per example. + "buckets": [ # Output only. Defines the histogram bucket. + { # Dataset bucket used to create a histogram for the distribution given a population of values. + "count": "A String", # Output only. Number of values in the bucket. + "left": 3.14, # Output only. Left bound of the bucket. + "right": 3.14, # Output only. Right bound of the bucket. + }, + ], + "max": 3.14, # Output only. The maximum of the population values. + "mean": 3.14, # Output only. The arithmetic mean of the values in the population. + "median": 3.14, # Output only. The median of the values in the population. + "min": 3.14, # Output only. The minimum of the population values. + "p5": 3.14, # Output only. The 5th percentile of the values in the population. + "p95": 3.14, # Output only. The 95th percentile of the values in the population. + "sum": 3.14, # Output only. Sum of a given population of values. + }, + "userOutputTokenDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the user output tokens. + "buckets": [ # Output only. Defines the histogram bucket. + { # Dataset bucket used to create a histogram for the distribution given a population of values. + "count": "A String", # Output only. Number of values in the bucket. 
+ "left": 3.14, # Output only. Left bound of the bucket. + "right": 3.14, # Output only. Right bound of the bucket. + }, + ], + "max": 3.14, # Output only. The maximum of the population values. + "mean": 3.14, # Output only. The arithmetic mean of the values in the population. + "median": 3.14, # Output only. The median of the values in the population. + "min": 3.14, # Output only. The minimum of the population values. + "p5": 3.14, # Output only. The 5th percentile of the values in the population. + "p95": 3.14, # Output only. The 95th percentile of the values in the population. + "sum": 3.14, # Output only. Sum of a given population of values. + }, + }, + }, "supervisedTuningDataStats": { # Tuning data statistics for Supervised Tuning. # The SFT Tuning data stats. "totalBillableCharacterCount": "A String", # Output only. Number of billable characters in the tuning dataset. "totalTuningCharacterCount": "A String", # Output only. Number of tuning characters in the tuning dataset. @@ -420,6 +1016,19 @@

Method Details

"baseModel": "A String", # The base model that is being tuned, e.g., "gemini-1.0-pro-002". "createTime": "A String", # Output only. Time when the TuningJob was created. "description": "A String", # Optional. The description of the TuningJob. + "distillationSpec": { # Tuning Spec for Distillation. # Tuning Spec for Distillation. + "baseTeacherModel": "A String", # The base teacher model that is being distilled, e.g., "gemini-1.0-pro-002". + "hyperParameters": { # Hyperparameters for Distillation. # Optional. Hyperparameters for Distillation. + "adapterSize": "A String", # Optional. Adapter size for distillation. + "epochCount": "A String", # Optional. Number of complete passes the model makes over the entire training dataset during training. + "learningRateMultiplier": 3.14, # Optional. Multiplier for adjusting the default learning rate. + }, + "pipelineRootDirectory": "A String", # Required. A path in a Cloud Storage bucket, which will be treated as the root output directory of the distillation pipeline. It is used by the system to generate the paths of output artifacts. + "studentModel": "A String", # The student model that is being tuned, e.g., "google/gemma-2b-it". + "trainingDatasetUri": "A String", # Required. Cloud Storage path to file containing training dataset for tuning. The dataset must be formatted as a JSONL file. + "tunedTeacherModelSource": "A String", # The resource name of the Tuned teacher model. Format: `projects/{project}/locations/{location}/models/{model}`. + "validationDatasetUri": "A String", # Optional. Cloud Storage path to file containing validation dataset for tuning. The dataset must be formatted as a JSONL file. + }, "encryptionSpec": { # Represents a customer-managed encryption key spec that can be applied to a top-level resource. # Customer-managed encryption key options for a TuningJob. If this is set, then all resources created by the TuningJob will be encrypted with the provided encryption key. "kmsKeyName": "A String", # Required. The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: `projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key`. The key needs to be in the same region as where the compute resource is created. }, @@ -438,6 +1047,23 @@

Method Details

"a_key": "A String", }, "name": "A String", # Output only. Identifier. Resource name of a TuningJob. Format: `projects/{project}/locations/{location}/tuningJobs/{tuning_job}` + "pipelineJob": "A String", # Output only. The resource name of the PipelineJob associated with the TuningJob. Format: `projects/{project}/locations/{location}/pipelineJobs/{pipeline_job}`. + "reinforcementLearningSpec": { # Tuning Spec for Reinforcement Learning. # Tuning Spec for Reinforcement Learning. + "hyperParameters": { # Hyperparameters for Reinforcement Learning. # Optional. Additional hyper-parameters to use during tuning. + "epochCount": "A String", # Optional. Number of training epoches for the tuning job. + "humanFeedbackConfig": { # Configures Reinforcement Learning to use human feedback during tuning. # Configures Reinforcement Learning to use human feedback for preference data during tuning. + "preferenceDatasetUri": "A String", # Required. Cloud Storage path to human preference data. + }, + "klCoefficient": 3.14, # Optional. KL divergence coefficient for Reinforcement Learning. + "learningRateMultiplier": 3.14, # Optional. Learning rate multiplier for Reinforcement Learning. + "rewardModelTrainingConfig": { # Configures Reinforcement Learning to learn preference by training a reward model. # Configures Reinforcement Learning to train a reward model to learn preference. + "epochCount": "A String", # Optional. Number of training epoches for the reward model training job. + "learningRateMultiplier": 3.14, # Optional. Learning rate multiplier for reward model training. + }, + }, + "promptDatasetUri": "A String", # Required. Cloud Storage path to the prompt dataset to use during Reinforcement Learning. + "validationDatasetUri": "A String", # Optional. Cloud Storage path to the validation dataset to use during Reinforcement Learning. + }, "startTime": "A String", # Output only. Time when the TuningJob for the first time entered the `JOB_STATE_RUNNING` state. "state": "A String", # Output only. The detailed state of the job. "supervisedTuningSpec": { # Tuning Spec for Supervised Tuning. # Tuning Spec for Supervised Fine Tuning. @@ -455,6 +1081,274 @@

Method Details

}, "tunedModelDisplayName": "A String", # Optional. The display name of the TunedModel. The name can be up to 128 characters long and can consist of any UTF-8 characters. "tuningDataStats": { # The tuning data statistic values for TuningJob. # Output only. The tuning data statistics associated with this TuningJob. + "distillationDataStats": { # Statistics computed for datasets used for distillation. # Statistics for distillation. + "trainingDatasetStats": { # Statistics computed over a tuning dataset. # Output only. Statistics computed for the training dataset. + "totalBillableCharacterCount": "A String", # Output only. Number of billable characters in the tuning dataset. + "totalTuningCharacterCount": "A String", # Output only. Number of tuning characters in the tuning dataset. + "tuningDatasetExampleCount": "A String", # Output only. Number of examples in the tuning dataset. + "tuningStepCount": "A String", # Output only. Number of tuning steps for this Tuning Job. + "userDatasetExamples": [ # Output only. Sample user messages in the training dataset uri. + { # The base structured datatype containing multi-part content of a message. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn. + "parts": [ # Required. Ordered `Parts` that constitute a single message. Parts may have different IANA MIME types. + { # A datatype containing media that is part of a multi-part `Content` message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. A `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes. + "fileData": { # URI based data. # Optional. URI based data. + "fileUri": "A String", # Required. URI. + "mimeType": "A String", # Required. The IANA standard MIME type of the source data. + }, + "functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] with the parameters and their values. + "args": { # Optional. Required. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details. + "a_key": "", # Properties of the object. + }, + "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name]. + }, + "functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result output of a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function call. It is used as context to the model. + "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name]. + "response": { # Required. The function response in JSON object format. + "a_key": "", # Properties of the object. + }, + }, + "inlineData": { # Content blob. 
It's preferred to send as text directly rather than raw bytes. # Optional. Inlined bytes data. + "data": "A String", # Required. Raw bytes. + "mimeType": "A String", # Required. The IANA standard MIME type of the source data. + }, + "text": "A String", # Optional. Text part (can be code). + "videoMetadata": { # Metadata describes the input video content. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data. + "endOffset": "A String", # Optional. The end offset of the video. + "startOffset": "A String", # Optional. The start offset of the video. + }, + }, + ], + "role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. Useful to set for multi-turn conversations, otherwise can be left blank or unset. + }, + ], + "userInputTokenDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the user input tokens. + "buckets": [ # Output only. Defines the histogram bucket. + { # Dataset bucket used to create a histogram for the distribution given a population of values. + "count": "A String", # Output only. Number of values in the bucket. + "left": 3.14, # Output only. Left bound of the bucket. + "right": 3.14, # Output only. Right bound of the bucket. + }, + ], + "max": 3.14, # Output only. The maximum of the population values. + "mean": 3.14, # Output only. The arithmetic mean of the values in the population. + "median": 3.14, # Output only. The median of the values in the population. + "min": 3.14, # Output only. The minimum of the population values. + "p5": 3.14, # Output only. The 5th percentile of the values in the population. + "p95": 3.14, # Output only. The 95th percentile of the values in the population. + "sum": 3.14, # Output only. Sum of a given population of values. + }, + "userMessagePerExampleDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the messages per example. + "buckets": [ # Output only. Defines the histogram bucket. + { # Dataset bucket used to create a histogram for the distribution given a population of values. + "count": "A String", # Output only. Number of values in the bucket. + "left": 3.14, # Output only. Left bound of the bucket. + "right": 3.14, # Output only. Right bound of the bucket. + }, + ], + "max": 3.14, # Output only. The maximum of the population values. + "mean": 3.14, # Output only. The arithmetic mean of the values in the population. + "median": 3.14, # Output only. The median of the values in the population. + "min": 3.14, # Output only. The minimum of the population values. + "p5": 3.14, # Output only. The 5th percentile of the values in the population. + "p95": 3.14, # Output only. The 95th percentile of the values in the population. + "sum": 3.14, # Output only. Sum of a given population of values. + }, + "userOutputTokenDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the user output tokens. + "buckets": [ # Output only. Defines the histogram bucket. + { # Dataset bucket used to create a histogram for the distribution given a population of values. + "count": "A String", # Output only. Number of values in the bucket. + "left": 3.14, # Output only. Left bound of the bucket. + "right": 3.14, # Output only. Right bound of the bucket. + }, + ], + "max": 3.14, # Output only. The maximum of the population values. + "mean": 3.14, # Output only. 
The arithmetic mean of the values in the population. + "median": 3.14, # Output only. The median of the values in the population. + "min": 3.14, # Output only. The minimum of the population values. + "p5": 3.14, # Output only. The 5th percentile of the values in the population. + "p95": 3.14, # Output only. The 95th percentile of the values in the population. + "sum": 3.14, # Output only. Sum of a given population of values. + }, + }, + }, + "reinforcementLearningDataStats": { # Statistics computed for datasets used for reinforcement learning. # Statistics for reinforcement learning. + "preferenceDatasetStats": { # Statistics computed over a tuning dataset. # Output only. Statistics computed for the preference dataset. This can be either a human preference dataset or a preference dataset generated from AI feedback. + "totalBillableCharacterCount": "A String", # Output only. Number of billable characters in the tuning dataset. + "totalTuningCharacterCount": "A String", # Output only. Number of tuning characters in the tuning dataset. + "tuningDatasetExampleCount": "A String", # Output only. Number of examples in the tuning dataset. + "tuningStepCount": "A String", # Output only. Number of tuning steps for this Tuning Job. + "userDatasetExamples": [ # Output only. Sample user messages in the training dataset uri. + { # The base structured datatype containing multi-part content of a message. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn. + "parts": [ # Required. Ordered `Parts` that constitute a single message. Parts may have different IANA MIME types. + { # A datatype containing media that is part of a multi-part `Content` message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. A `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes. + "fileData": { # URI based data. # Optional. URI based data. + "fileUri": "A String", # Required. URI. + "mimeType": "A String", # Required. The IANA standard MIME type of the source data. + }, + "functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] with the parameters and their values. + "args": { # Optional. Required. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details. + "a_key": "", # Properties of the object. + }, + "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name]. + }, + "functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result output of a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function call. It is used as context to the model. + "name": "A String", # Required. 
The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name]. + "response": { # Required. The function response in JSON object format. + "a_key": "", # Properties of the object. + }, + }, + "inlineData": { # Content blob. It's preferred to send as text directly rather than raw bytes. # Optional. Inlined bytes data. + "data": "A String", # Required. Raw bytes. + "mimeType": "A String", # Required. The IANA standard MIME type of the source data. + }, + "text": "A String", # Optional. Text part (can be code). + "videoMetadata": { # Metadata describes the input video content. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data. + "endOffset": "A String", # Optional. The end offset of the video. + "startOffset": "A String", # Optional. The start offset of the video. + }, + }, + ], + "role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. Useful to set for multi-turn conversations, otherwise can be left blank or unset. + }, + ], + "userInputTokenDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the user input tokens. + "buckets": [ # Output only. Defines the histogram bucket. + { # Dataset bucket used to create a histogram for the distribution given a population of values. + "count": "A String", # Output only. Number of values in the bucket. + "left": 3.14, # Output only. Left bound of the bucket. + "right": 3.14, # Output only. Right bound of the bucket. + }, + ], + "max": 3.14, # Output only. The maximum of the population values. + "mean": 3.14, # Output only. The arithmetic mean of the values in the population. + "median": 3.14, # Output only. The median of the values in the population. + "min": 3.14, # Output only. The minimum of the population values. + "p5": 3.14, # Output only. The 5th percentile of the values in the population. + "p95": 3.14, # Output only. The 95th percentile of the values in the population. + "sum": 3.14, # Output only. Sum of a given population of values. + }, + "userMessagePerExampleDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the messages per example. + "buckets": [ # Output only. Defines the histogram bucket. + { # Dataset bucket used to create a histogram for the distribution given a population of values. + "count": "A String", # Output only. Number of values in the bucket. + "left": 3.14, # Output only. Left bound of the bucket. + "right": 3.14, # Output only. Right bound of the bucket. + }, + ], + "max": 3.14, # Output only. The maximum of the population values. + "mean": 3.14, # Output only. The arithmetic mean of the values in the population. + "median": 3.14, # Output only. The median of the values in the population. + "min": 3.14, # Output only. The minimum of the population values. + "p5": 3.14, # Output only. The 5th percentile of the values in the population. + "p95": 3.14, # Output only. The 95th percentile of the values in the population. + "sum": 3.14, # Output only. Sum of a given population of values. + }, + "userOutputTokenDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the user output tokens. + "buckets": [ # Output only. Defines the histogram bucket. + { # Dataset bucket used to create a histogram for the distribution given a population of values. + "count": "A String", # Output only. Number of values in the bucket. 
+ "left": 3.14, # Output only. Left bound of the bucket. + "right": 3.14, # Output only. Right bound of the bucket. + }, + ], + "max": 3.14, # Output only. The maximum of the population values. + "mean": 3.14, # Output only. The arithmetic mean of the values in the population. + "median": 3.14, # Output only. The median of the values in the population. + "min": 3.14, # Output only. The minimum of the population values. + "p5": 3.14, # Output only. The 5th percentile of the values in the population. + "p95": 3.14, # Output only. The 95th percentile of the values in the population. + "sum": 3.14, # Output only. Sum of a given population of values. + }, + }, + "promptDatasetStats": { # Statistics computed over a tuning dataset. # Output only. Statistics computed for the prompt dataset used during reinforcement learning. + "totalBillableCharacterCount": "A String", # Output only. Number of billable characters in the tuning dataset. + "totalTuningCharacterCount": "A String", # Output only. Number of tuning characters in the tuning dataset. + "tuningDatasetExampleCount": "A String", # Output only. Number of examples in the tuning dataset. + "tuningStepCount": "A String", # Output only. Number of tuning steps for this Tuning Job. + "userDatasetExamples": [ # Output only. Sample user messages in the training dataset uri. + { # The base structured datatype containing multi-part content of a message. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn. + "parts": [ # Required. Ordered `Parts` that constitute a single message. Parts may have different IANA MIME types. + { # A datatype containing media that is part of a multi-part `Content` message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. A `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes. + "fileData": { # URI based data. # Optional. URI based data. + "fileUri": "A String", # Required. URI. + "mimeType": "A String", # Required. The IANA standard MIME type of the source data. + }, + "functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] with the parameters and their values. + "args": { # Optional. Required. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details. + "a_key": "", # Properties of the object. + }, + "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name]. + }, + "functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result output of a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function call. It is used as context to the model. + "name": "A String", # Required. 
The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name]. + "response": { # Required. The function response in JSON object format. + "a_key": "", # Properties of the object. + }, + }, + "inlineData": { # Content blob. It's preferred to send as text directly rather than raw bytes. # Optional. Inlined bytes data. + "data": "A String", # Required. Raw bytes. + "mimeType": "A String", # Required. The IANA standard MIME type of the source data. + }, + "text": "A String", # Optional. Text part (can be code). + "videoMetadata": { # Metadata describes the input video content. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data. + "endOffset": "A String", # Optional. The end offset of the video. + "startOffset": "A String", # Optional. The start offset of the video. + }, + }, + ], + "role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. Useful to set for multi-turn conversations, otherwise can be left blank or unset. + }, + ], + "userInputTokenDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the user input tokens. + "buckets": [ # Output only. Defines the histogram bucket. + { # Dataset bucket used to create a histogram for the distribution given a population of values. + "count": "A String", # Output only. Number of values in the bucket. + "left": 3.14, # Output only. Left bound of the bucket. + "right": 3.14, # Output only. Right bound of the bucket. + }, + ], + "max": 3.14, # Output only. The maximum of the population values. + "mean": 3.14, # Output only. The arithmetic mean of the values in the population. + "median": 3.14, # Output only. The median of the values in the population. + "min": 3.14, # Output only. The minimum of the population values. + "p5": 3.14, # Output only. The 5th percentile of the values in the population. + "p95": 3.14, # Output only. The 95th percentile of the values in the population. + "sum": 3.14, # Output only. Sum of a given population of values. + }, + "userMessagePerExampleDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the messages per example. + "buckets": [ # Output only. Defines the histogram bucket. + { # Dataset bucket used to create a histogram for the distribution given a population of values. + "count": "A String", # Output only. Number of values in the bucket. + "left": 3.14, # Output only. Left bound of the bucket. + "right": 3.14, # Output only. Right bound of the bucket. + }, + ], + "max": 3.14, # Output only. The maximum of the population values. + "mean": 3.14, # Output only. The arithmetic mean of the values in the population. + "median": 3.14, # Output only. The median of the values in the population. + "min": 3.14, # Output only. The minimum of the population values. + "p5": 3.14, # Output only. The 5th percentile of the values in the population. + "p95": 3.14, # Output only. The 95th percentile of the values in the population. + "sum": 3.14, # Output only. Sum of a given population of values. + }, + "userOutputTokenDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the user output tokens. + "buckets": [ # Output only. Defines the histogram bucket. + { # Dataset bucket used to create a histogram for the distribution given a population of values. + "count": "A String", # Output only. Number of values in the bucket. 
+ "left": 3.14, # Output only. Left bound of the bucket. + "right": 3.14, # Output only. Right bound of the bucket. + }, + ], + "max": 3.14, # Output only. The maximum of the population values. + "mean": 3.14, # Output only. The arithmetic mean of the values in the population. + "median": 3.14, # Output only. The median of the values in the population. + "min": 3.14, # Output only. The minimum of the population values. + "p5": 3.14, # Output only. The 5th percentile of the values in the population. + "p95": 3.14, # Output only. The 95th percentile of the values in the population. + "sum": 3.14, # Output only. Sum of a given population of values. + }, + }, + }, "supervisedTuningDataStats": { # Tuning data statistics for Supervised Tuning. # The SFT Tuning data stats. "totalBillableCharacterCount": "A String", # Output only. Number of billable characters in the tuning dataset. "totalTuningCharacterCount": "A String", # Output only. Number of tuning characters in the tuning dataset. @@ -572,6 +1466,19 @@

Method Details

"baseModel": "A String", # The base model that is being tuned, e.g., "gemini-1.0-pro-002". "createTime": "A String", # Output only. Time when the TuningJob was created. "description": "A String", # Optional. The description of the TuningJob. + "distillationSpec": { # Tuning Spec for Distillation. # Tuning Spec for Distillation. + "baseTeacherModel": "A String", # The base teacher model that is being distilled, e.g., "gemini-1.0-pro-002". + "hyperParameters": { # Hyperparameters for Distillation. # Optional. Hyperparameters for Distillation. + "adapterSize": "A String", # Optional. Adapter size for distillation. + "epochCount": "A String", # Optional. Number of complete passes the model makes over the entire training dataset during training. + "learningRateMultiplier": 3.14, # Optional. Multiplier for adjusting the default learning rate. + }, + "pipelineRootDirectory": "A String", # Required. A path in a Cloud Storage bucket, which will be treated as the root output directory of the distillation pipeline. It is used by the system to generate the paths of output artifacts. + "studentModel": "A String", # The student model that is being tuned, e.g., "google/gemma-2b-it". + "trainingDatasetUri": "A String", # Required. Cloud Storage path to file containing training dataset for tuning. The dataset must be formatted as a JSONL file. + "tunedTeacherModelSource": "A String", # The resource name of the Tuned teacher model. Format: `projects/{project}/locations/{location}/models/{model}`. + "validationDatasetUri": "A String", # Optional. Cloud Storage path to file containing validation dataset for tuning. The dataset must be formatted as a JSONL file. + }, "encryptionSpec": { # Represents a customer-managed encryption key spec that can be applied to a top-level resource. # Customer-managed encryption key options for a TuningJob. If this is set, then all resources created by the TuningJob will be encrypted with the provided encryption key. "kmsKeyName": "A String", # Required. The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: `projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key`. The key needs to be in the same region as where the compute resource is created. }, @@ -590,6 +1497,23 @@

Method Details

"a_key": "A String", }, "name": "A String", # Output only. Identifier. Resource name of a TuningJob. Format: `projects/{project}/locations/{location}/tuningJobs/{tuning_job}` + "pipelineJob": "A String", # Output only. The resource name of the PipelineJob associated with the TuningJob. Format: `projects/{project}/locations/{location}/pipelineJobs/{pipeline_job}`. + "reinforcementLearningSpec": { # Tuning Spec for Reinforcement Learning. # Tuning Spec for Reinforcement Learning. + "hyperParameters": { # Hyperparameters for Reinforcement Learning. # Optional. Additional hyper-parameters to use during tuning. + "epochCount": "A String", # Optional. Number of training epoches for the tuning job. + "humanFeedbackConfig": { # Configures Reinforcement Learning to use human feedback during tuning. # Configures Reinforcement Learning to use human feedback for preference data during tuning. + "preferenceDatasetUri": "A String", # Required. Cloud Storage path to human preference data. + }, + "klCoefficient": 3.14, # Optional. KL divergence coefficient for Reinforcement Learning. + "learningRateMultiplier": 3.14, # Optional. Learning rate multiplier for Reinforcement Learning. + "rewardModelTrainingConfig": { # Configures Reinforcement Learning to learn preference by training a reward model. # Configures Reinforcement Learning to train a reward model to learn preference. + "epochCount": "A String", # Optional. Number of training epoches for the reward model training job. + "learningRateMultiplier": 3.14, # Optional. Learning rate multiplier for reward model training. + }, + }, + "promptDatasetUri": "A String", # Required. Cloud Storage path to the prompt dataset to use during Reinforcement Learning. + "validationDatasetUri": "A String", # Optional. Cloud Storage path to the validation dataset to use during Reinforcement Learning. + }, "startTime": "A String", # Output only. Time when the TuningJob for the first time entered the `JOB_STATE_RUNNING` state. "state": "A String", # Output only. The detailed state of the job. "supervisedTuningSpec": { # Tuning Spec for Supervised Tuning. # Tuning Spec for Supervised Fine Tuning. @@ -607,6 +1531,274 @@

Method Details

}, "tunedModelDisplayName": "A String", # Optional. The display name of the TunedModel. The name can be up to 128 characters long and can consist of any UTF-8 characters. "tuningDataStats": { # The tuning data statistic values for TuningJob. # Output only. The tuning data statistics associated with this TuningJob. + "distillationDataStats": { # Statistics computed for datasets used for distillation. # Statistics for distillation. + "trainingDatasetStats": { # Statistics computed over a tuning dataset. # Output only. Statistics computed for the training dataset. + "totalBillableCharacterCount": "A String", # Output only. Number of billable characters in the tuning dataset. + "totalTuningCharacterCount": "A String", # Output only. Number of tuning characters in the tuning dataset. + "tuningDatasetExampleCount": "A String", # Output only. Number of examples in the tuning dataset. + "tuningStepCount": "A String", # Output only. Number of tuning steps for this Tuning Job. + "userDatasetExamples": [ # Output only. Sample user messages in the training dataset uri. + { # The base structured datatype containing multi-part content of a message. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn. + "parts": [ # Required. Ordered `Parts` that constitute a single message. Parts may have different IANA MIME types. + { # A datatype containing media that is part of a multi-part `Content` message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. A `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes. + "fileData": { # URI based data. # Optional. URI based data. + "fileUri": "A String", # Required. URI. + "mimeType": "A String", # Required. The IANA standard MIME type of the source data. + }, + "functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] with the parameters and their values. + "args": { # Optional. Required. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details. + "a_key": "", # Properties of the object. + }, + "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name]. + }, + "functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result output of a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function call. It is used as context to the model. + "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name]. + "response": { # Required. The function response in JSON object format. + "a_key": "", # Properties of the object. + }, + }, + "inlineData": { # Content blob. 
It's preferred to send as text directly rather than raw bytes. # Optional. Inlined bytes data. + "data": "A String", # Required. Raw bytes. + "mimeType": "A String", # Required. The IANA standard MIME type of the source data. + }, + "text": "A String", # Optional. Text part (can be code). + "videoMetadata": { # Metadata describes the input video content. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data. + "endOffset": "A String", # Optional. The end offset of the video. + "startOffset": "A String", # Optional. The start offset of the video. + }, + }, + ], + "role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. Useful to set for multi-turn conversations, otherwise can be left blank or unset. + }, + ], + "userInputTokenDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the user input tokens. + "buckets": [ # Output only. Defines the histogram bucket. + { # Dataset bucket used to create a histogram for the distribution given a population of values. + "count": "A String", # Output only. Number of values in the bucket. + "left": 3.14, # Output only. Left bound of the bucket. + "right": 3.14, # Output only. Right bound of the bucket. + }, + ], + "max": 3.14, # Output only. The maximum of the population values. + "mean": 3.14, # Output only. The arithmetic mean of the values in the population. + "median": 3.14, # Output only. The median of the values in the population. + "min": 3.14, # Output only. The minimum of the population values. + "p5": 3.14, # Output only. The 5th percentile of the values in the population. + "p95": 3.14, # Output only. The 95th percentile of the values in the population. + "sum": 3.14, # Output only. Sum of a given population of values. + }, + "userMessagePerExampleDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the messages per example. + "buckets": [ # Output only. Defines the histogram bucket. + { # Dataset bucket used to create a histogram for the distribution given a population of values. + "count": "A String", # Output only. Number of values in the bucket. + "left": 3.14, # Output only. Left bound of the bucket. + "right": 3.14, # Output only. Right bound of the bucket. + }, + ], + "max": 3.14, # Output only. The maximum of the population values. + "mean": 3.14, # Output only. The arithmetic mean of the values in the population. + "median": 3.14, # Output only. The median of the values in the population. + "min": 3.14, # Output only. The minimum of the population values. + "p5": 3.14, # Output only. The 5th percentile of the values in the population. + "p95": 3.14, # Output only. The 95th percentile of the values in the population. + "sum": 3.14, # Output only. Sum of a given population of values. + }, + "userOutputTokenDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the user output tokens. + "buckets": [ # Output only. Defines the histogram bucket. + { # Dataset bucket used to create a histogram for the distribution given a population of values. + "count": "A String", # Output only. Number of values in the bucket. + "left": 3.14, # Output only. Left bound of the bucket. + "right": 3.14, # Output only. Right bound of the bucket. + }, + ], + "max": 3.14, # Output only. The maximum of the population values. + "mean": 3.14, # Output only. 
The arithmetic mean of the values in the population. + "median": 3.14, # Output only. The median of the values in the population. + "min": 3.14, # Output only. The minimum of the population values. + "p5": 3.14, # Output only. The 5th percentile of the values in the population. + "p95": 3.14, # Output only. The 95th percentile of the values in the population. + "sum": 3.14, # Output only. Sum of a given population of values. + }, + }, + }, + "reinforcementLearningDataStats": { # Statistics computed for datasets used for reinforcement learning. # Statistics for reinforcement learning. + "preferenceDatasetStats": { # Statistics computed over a tuning dataset. # Output only. Statistics computed for the preference dataset. This can be either a human preference dataset or a preference dataset generated from AI feedback. + "totalBillableCharacterCount": "A String", # Output only. Number of billable characters in the tuning dataset. + "totalTuningCharacterCount": "A String", # Output only. Number of tuning characters in the tuning dataset. + "tuningDatasetExampleCount": "A String", # Output only. Number of examples in the tuning dataset. + "tuningStepCount": "A String", # Output only. Number of tuning steps for this Tuning Job. + "userDatasetExamples": [ # Output only. Sample user messages in the training dataset uri. + { # The base structured datatype containing multi-part content of a message. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn. + "parts": [ # Required. Ordered `Parts` that constitute a single message. Parts may have different IANA MIME types. + { # A datatype containing media that is part of a multi-part `Content` message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. A `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes. + "fileData": { # URI based data. # Optional. URI based data. + "fileUri": "A String", # Required. URI. + "mimeType": "A String", # Required. The IANA standard MIME type of the source data. + }, + "functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] with the parameters and their values. + "args": { # Optional. Required. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details. + "a_key": "", # Properties of the object. + }, + "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name]. + }, + "functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result output of a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function call. It is used as context to the model. + "name": "A String", # Required. 
The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name]. + "response": { # Required. The function response in JSON object format. + "a_key": "", # Properties of the object. + }, + }, + "inlineData": { # Content blob. It's preferred to send as text directly rather than raw bytes. # Optional. Inlined bytes data. + "data": "A String", # Required. Raw bytes. + "mimeType": "A String", # Required. The IANA standard MIME type of the source data. + }, + "text": "A String", # Optional. Text part (can be code). + "videoMetadata": { # Metadata describes the input video content. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data. + "endOffset": "A String", # Optional. The end offset of the video. + "startOffset": "A String", # Optional. The start offset of the video. + }, + }, + ], + "role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. Useful to set for multi-turn conversations, otherwise can be left blank or unset. + }, + ], + "userInputTokenDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the user input tokens. + "buckets": [ # Output only. Defines the histogram bucket. + { # Dataset bucket used to create a histogram for the distribution given a population of values. + "count": "A String", # Output only. Number of values in the bucket. + "left": 3.14, # Output only. Left bound of the bucket. + "right": 3.14, # Output only. Right bound of the bucket. + }, + ], + "max": 3.14, # Output only. The maximum of the population values. + "mean": 3.14, # Output only. The arithmetic mean of the values in the population. + "median": 3.14, # Output only. The median of the values in the population. + "min": 3.14, # Output only. The minimum of the population values. + "p5": 3.14, # Output only. The 5th percentile of the values in the population. + "p95": 3.14, # Output only. The 95th percentile of the values in the population. + "sum": 3.14, # Output only. Sum of a given population of values. + }, + "userMessagePerExampleDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the messages per example. + "buckets": [ # Output only. Defines the histogram bucket. + { # Dataset bucket used to create a histogram for the distribution given a population of values. + "count": "A String", # Output only. Number of values in the bucket. + "left": 3.14, # Output only. Left bound of the bucket. + "right": 3.14, # Output only. Right bound of the bucket. + }, + ], + "max": 3.14, # Output only. The maximum of the population values. + "mean": 3.14, # Output only. The arithmetic mean of the values in the population. + "median": 3.14, # Output only. The median of the values in the population. + "min": 3.14, # Output only. The minimum of the population values. + "p5": 3.14, # Output only. The 5th percentile of the values in the population. + "p95": 3.14, # Output only. The 95th percentile of the values in the population. + "sum": 3.14, # Output only. Sum of a given population of values. + }, + "userOutputTokenDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the user output tokens. + "buckets": [ # Output only. Defines the histogram bucket. + { # Dataset bucket used to create a histogram for the distribution given a population of values. + "count": "A String", # Output only. Number of values in the bucket. 
+ "left": 3.14, # Output only. Left bound of the bucket. + "right": 3.14, # Output only. Right bound of the bucket. + }, + ], + "max": 3.14, # Output only. The maximum of the population values. + "mean": 3.14, # Output only. The arithmetic mean of the values in the population. + "median": 3.14, # Output only. The median of the values in the population. + "min": 3.14, # Output only. The minimum of the population values. + "p5": 3.14, # Output only. The 5th percentile of the values in the population. + "p95": 3.14, # Output only. The 95th percentile of the values in the population. + "sum": 3.14, # Output only. Sum of a given population of values. + }, + }, + "promptDatasetStats": { # Statistics computed over a tuning dataset. # Output only. Statistics computed for the prompt dataset used during reinforcement learning. + "totalBillableCharacterCount": "A String", # Output only. Number of billable characters in the tuning dataset. + "totalTuningCharacterCount": "A String", # Output only. Number of tuning characters in the tuning dataset. + "tuningDatasetExampleCount": "A String", # Output only. Number of examples in the tuning dataset. + "tuningStepCount": "A String", # Output only. Number of tuning steps for this Tuning Job. + "userDatasetExamples": [ # Output only. Sample user messages in the training dataset uri. + { # The base structured datatype containing multi-part content of a message. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn. + "parts": [ # Required. Ordered `Parts` that constitute a single message. Parts may have different IANA MIME types. + { # A datatype containing media that is part of a multi-part `Content` message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. A `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes. + "fileData": { # URI based data. # Optional. URI based data. + "fileUri": "A String", # Required. URI. + "mimeType": "A String", # Required. The IANA standard MIME type of the source data. + }, + "functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] with the parameters and their values. + "args": { # Optional. Required. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details. + "a_key": "", # Properties of the object. + }, + "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name]. + }, + "functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result output of a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function call. It is used as context to the model. + "name": "A String", # Required. 
The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name]. + "response": { # Required. The function response in JSON object format. + "a_key": "", # Properties of the object. + }, + }, + "inlineData": { # Content blob. It's preferred to send as text directly rather than raw bytes. # Optional. Inlined bytes data. + "data": "A String", # Required. Raw bytes. + "mimeType": "A String", # Required. The IANA standard MIME type of the source data. + }, + "text": "A String", # Optional. Text part (can be code). + "videoMetadata": { # Metadata describes the input video content. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data. + "endOffset": "A String", # Optional. The end offset of the video. + "startOffset": "A String", # Optional. The start offset of the video. + }, + }, + ], + "role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. Useful to set for multi-turn conversations, otherwise can be left blank or unset. + }, + ], + "userInputTokenDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the user input tokens. + "buckets": [ # Output only. Defines the histogram bucket. + { # Dataset bucket used to create a histogram for the distribution given a population of values. + "count": "A String", # Output only. Number of values in the bucket. + "left": 3.14, # Output only. Left bound of the bucket. + "right": 3.14, # Output only. Right bound of the bucket. + }, + ], + "max": 3.14, # Output only. The maximum of the population values. + "mean": 3.14, # Output only. The arithmetic mean of the values in the population. + "median": 3.14, # Output only. The median of the values in the population. + "min": 3.14, # Output only. The minimum of the population values. + "p5": 3.14, # Output only. The 5th percentile of the values in the population. + "p95": 3.14, # Output only. The 95th percentile of the values in the population. + "sum": 3.14, # Output only. Sum of a given population of values. + }, + "userMessagePerExampleDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the messages per example. + "buckets": [ # Output only. Defines the histogram bucket. + { # Dataset bucket used to create a histogram for the distribution given a population of values. + "count": "A String", # Output only. Number of values in the bucket. + "left": 3.14, # Output only. Left bound of the bucket. + "right": 3.14, # Output only. Right bound of the bucket. + }, + ], + "max": 3.14, # Output only. The maximum of the population values. + "mean": 3.14, # Output only. The arithmetic mean of the values in the population. + "median": 3.14, # Output only. The median of the values in the population. + "min": 3.14, # Output only. The minimum of the population values. + "p5": 3.14, # Output only. The 5th percentile of the values in the population. + "p95": 3.14, # Output only. The 95th percentile of the values in the population. + "sum": 3.14, # Output only. Sum of a given population of values. + }, + "userOutputTokenDistribution": { # Distribution computed over a tuning dataset. # Output only. Dataset distributions for the user output tokens. + "buckets": [ # Output only. Defines the histogram bucket. + { # Dataset bucket used to create a histogram for the distribution given a population of values. + "count": "A String", # Output only. Number of values in the bucket. 
+ "left": 3.14, # Output only. Left bound of the bucket. + "right": 3.14, # Output only. Right bound of the bucket. + }, + ], + "max": 3.14, # Output only. The maximum of the population values. + "mean": 3.14, # Output only. The arithmetic mean of the values in the population. + "median": 3.14, # Output only. The median of the values in the population. + "min": 3.14, # Output only. The minimum of the population values. + "p5": 3.14, # Output only. The 5th percentile of the values in the population. + "p95": 3.14, # Output only. The 95th percentile of the values in the population. + "sum": 3.14, # Output only. Sum of a given population of values. + }, + }, + }, "supervisedTuningDataStats": { # Tuning data statistics for Supervised Tuning. # The SFT Tuning data stats. "totalBillableCharacterCount": "A String", # Output only. Number of billable characters in the tuning dataset. "totalTuningCharacterCount": "A String", # Output only. Number of tuning characters in the tuning dataset. diff --git a/docs/dyn/aiplatform_v1beta1.publishers.models.html b/docs/dyn/aiplatform_v1beta1.publishers.models.html index 9e78faa3509..9b1091ee5e2 100644 --- a/docs/dyn/aiplatform_v1beta1.publishers.models.html +++ b/docs/dyn/aiplatform_v1beta1.publishers.models.html @@ -176,7 +176,7 @@

Method Details

}, ], "healthProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes readiness probe. - "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take. + "exec": { # ExecAction specifies a command to execute. # ExecAction probes the health of a container by executing a command. "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. "A String", ], @@ -194,7 +194,7 @@

Method Details

"predictRoute": "A String", # Immutable. HTTP path on the container to send prediction requests to. Vertex AI forwards requests sent using projects.locations.endpoints.predict to this path on the container's IP address and port. Vertex AI then returns the container's response in the API response. For example, if you set this field to `/foo`, then when Vertex AI receives a prediction request, it forwards the request body in a POST request to the `/foo` path on the port of your container specified by the first value of this `ModelContainerSpec`'s ports field. If you don't specify this field, it defaults to the following value when you deploy this Model to an Endpoint: /v1/endpoints/ENDPOINT/deployedModels/DEPLOYED_MODEL:predict The placeholders in this value are replaced as follows: * ENDPOINT: The last segment (following `endpoints/`)of the Endpoint.name][] field of the Endpoint where this Model has been deployed. (Vertex AI makes this value available to your container code as the [`AIP_ENDPOINT_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) * DEPLOYED_MODEL: DeployedModel.id of the `DeployedModel`. (Vertex AI makes this value available to your container code as the [`AIP_DEPLOYED_MODEL_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) "sharedMemorySizeMb": "A String", # Immutable. The amount of the VM memory to reserve as the shared memory for the model in megabytes. "startupProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes startup probe. - "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take. + "exec": { # ExecAction specifies a command to execute. # ExecAction probes the health of a container by executing a command. "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. "A String", ], @@ -487,7 +487,7 @@

Method Details

}, ], "healthProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes readiness probe. - "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take. + "exec": { # ExecAction specifies a command to execute. # ExecAction probes the health of a container by executing a command. "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. "A String", ], @@ -505,7 +505,7 @@

Method Details

"predictRoute": "A String", # Immutable. HTTP path on the container to send prediction requests to. Vertex AI forwards requests sent using projects.locations.endpoints.predict to this path on the container's IP address and port. Vertex AI then returns the container's response in the API response. For example, if you set this field to `/foo`, then when Vertex AI receives a prediction request, it forwards the request body in a POST request to the `/foo` path on the port of your container specified by the first value of this `ModelContainerSpec`'s ports field. If you don't specify this field, it defaults to the following value when you deploy this Model to an Endpoint: /v1/endpoints/ENDPOINT/deployedModels/DEPLOYED_MODEL:predict The placeholders in this value are replaced as follows: * ENDPOINT: The last segment (following `endpoints/`)of the Endpoint.name][] field of the Endpoint where this Model has been deployed. (Vertex AI makes this value available to your container code as the [`AIP_ENDPOINT_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) * DEPLOYED_MODEL: DeployedModel.id of the `DeployedModel`. (Vertex AI makes this value available to your container code as the [`AIP_DEPLOYED_MODEL_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) "sharedMemorySizeMb": "A String", # Immutable. The amount of the VM memory to reserve as the shared memory for the model in megabytes. "startupProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes startup probe. - "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take. + "exec": { # ExecAction specifies a command to execute. # ExecAction probes the health of a container by executing a command. "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. "A String", ], diff --git a/docs/dyn/analyticsadmin_v1alpha.properties.dataStreams.eventEditRules.html b/docs/dyn/analyticsadmin_v1alpha.properties.dataStreams.eventEditRules.html new file mode 100644 index 00000000000..49160ce176f --- /dev/null +++ b/docs/dyn/analyticsadmin_v1alpha.properties.dataStreams.eventEditRules.html @@ -0,0 +1,116 @@ + + + +

Google Analytics Admin API . properties . dataStreams . eventEditRules

+

Instance Methods

+

+ close()

+

Close httplib2 connections.

+

+ reorder(parent, body=None, x__xgafv=None)

+

Changes the processing order of event edit rules on the specified stream.

+

Method Details

+
+ close() +
Close httplib2 connections.
+
+ +
+ reorder(parent, body=None, x__xgafv=None) +
Changes the processing order of event edit rules on the specified stream.
+
+Args:
+  parent: string, Required. Example format: properties/123/dataStreams/456 (required)
+  body: object, The request body.
+    The object takes the form of:
+
+{ # Request message for ReorderEventEditRules RPC.
+  "eventEditRules": [ # Required. EventEditRule resource names for the specified data stream, in the needed processing order. All EventEditRules for the stream must be present in the list.
+    "A String",
+  ],
+}
+
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # A generic empty message that you can re-use to avoid defining duplicated empty messages in your APIs. A typical example is to use it as the request or the response type of an API method. For instance: service Foo { rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty); }
+}
+
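+Example (an illustrative sketch, not generated reference output; assumes application
+default credentials and hypothetical rule names): a discovery-based client can issue
+the reorder call as follows. Note that every EventEditRule on the stream must appear
+in the list.
+
+  from googleapiclient.discovery import build
+
+  analyticsadmin = build("analyticsadmin", "v1alpha")
+  analyticsadmin.properties().dataStreams().eventEditRules().reorder(
+      parent="properties/123/dataStreams/456",
+      body={
+          "eventEditRules": [
+              # Hypothetical rule names: rule 2 is now processed before rule 1.
+              "properties/123/dataStreams/456/eventEditRules/2",
+              "properties/123/dataStreams/456/eventEditRules/1",
+          ]
+      },
+  ).execute()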
+ + \ No newline at end of file diff --git a/docs/dyn/analyticsadmin_v1alpha.properties.dataStreams.html b/docs/dyn/analyticsadmin_v1alpha.properties.dataStreams.html index 3beb3dcec17..23e2ba6562b 100644 --- a/docs/dyn/analyticsadmin_v1alpha.properties.dataStreams.html +++ b/docs/dyn/analyticsadmin_v1alpha.properties.dataStreams.html @@ -79,6 +79,11 @@

Instance Methods

Returns the eventCreateRules Resource.

+

+ eventEditRules() +

+

Returns the eventEditRules Resource.

+

measurementProtocolSecrets()

diff --git a/docs/dyn/analyticsadmin_v1beta.properties.dataStreams.eventEditRules.html b/docs/dyn/analyticsadmin_v1beta.properties.dataStreams.eventEditRules.html new file mode 100644 index 00000000000..5ecf0b96e50 --- /dev/null +++ b/docs/dyn/analyticsadmin_v1beta.properties.dataStreams.eventEditRules.html @@ -0,0 +1,116 @@ + + + +

Google Analytics Admin API . properties . dataStreams . eventEditRules

+

Instance Methods

+

+ close()

+

Close httplib2 connections.

+

+ reorder(parent, body=None, x__xgafv=None)

+

Changes the processing order of event edit rules on the specified stream.

+

Method Details

+
+ close() +
Close httplib2 connections.
+
+ +
+ reorder(parent, body=None, x__xgafv=None) +
Changes the processing order of event edit rules on the specified stream.
+
+Args:
+  parent: string, Required. Example format: properties/123/dataStreams/456 (required)
+  body: object, The request body.
+    The object takes the form of:
+
+{ # Request message for ReorderEventEditRules RPC.
+  "eventEditRules": [ # Required. EventEditRule resource names for the specified data stream, in the needed processing order. All EventEditRules for the stream must be present in the list.
+    "A String",
+  ],
+}
+
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # A generic empty message that you can re-use to avoid defining duplicated empty messages in your APIs. A typical example is to use it as the request or the response type of an API method. For instance: service Foo { rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty); }
+}
+
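+Example (an illustrative sketch; the rule names are hypothetical and the complete set of
+rules is assumed to be known): because the request must contain every EventEditRule for
+the stream, one way to promote a single rule is to rotate a known-complete list and
+submit it.
+
+  from googleapiclient.discovery import build
+
+  analyticsadmin = build("analyticsadmin", "v1beta")
+  # Assumed to be all of the stream's EventEditRule resource names, in current order.
+  rules = [
+      "properties/123/dataStreams/456/eventEditRules/10",
+      "properties/123/dataStreams/456/eventEditRules/11",
+      "properties/123/dataStreams/456/eventEditRules/12",
+  ]
+  new_order = rules[-1:] + rules[:-1]  # move the last rule to the front
+  analyticsadmin.properties().dataStreams().eventEditRules().reorder(
+      parent="properties/123/dataStreams/456",
+      body={"eventEditRules": new_order},
+  ).execute()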
+ + \ No newline at end of file diff --git a/docs/dyn/analyticsadmin_v1beta.properties.dataStreams.html b/docs/dyn/analyticsadmin_v1beta.properties.dataStreams.html index ffa13534241..b3a1636bb64 100644 --- a/docs/dyn/analyticsadmin_v1beta.properties.dataStreams.html +++ b/docs/dyn/analyticsadmin_v1beta.properties.dataStreams.html @@ -74,6 +74,11 @@

Google Analytics Admin API . properties . dataStreams

Instance Methods

+

+ eventEditRules() +

+

Returns the eventEditRules Resource.

+

measurementProtocolSecrets()

diff --git a/docs/dyn/androidmanagement_v1.enterprises.html b/docs/dyn/androidmanagement_v1.enterprises.html index 6a3ab870e05..6f3a39da71d 100644 --- a/docs/dyn/androidmanagement_v1.enterprises.html +++ b/docs/dyn/androidmanagement_v1.enterprises.html @@ -159,6 +159,9 @@

Method Details

"A String", ], "enterpriseDisplayName": "A String", # The name of the enterprise displayed to users. This field has a maximum length of 100 characters. + "googleAuthenticationSettings": { # Contains settings for Google-provided user authentication. # Settings for Google-provided user authentication. + "googleAuthenticationRequired": "A String", # Output only. Whether users need to be authenticated by Google during the enrollment process. IT admin can specify if Google authentication is enabled for the enterprise for knowledge worker devices. This value can be set only via the Google Admin Console. Google authentication can be used with signin_url In the case where Google authentication is required and a signin_url is specified, Google authentication will be launched before signin_url. + }, "logo": { # Data hosted at an external location. The data is to be downloaded by Android Device Policy and verified against the hash. # An image displayed as a logo during device provisioning. Supported types are: image/bmp, image/gif, image/x-ico, image/jpeg, image/png, image/webp, image/vnd.wap.wbmp, image/x-adobe-dng. "sha256Hash": "A String", # The base-64 encoded SHA-256 hash of the content hosted at url. If the content doesn't match this hash, Android Device Policy won't use the data. "url": "A String", # The absolute URL to the data, which must use either the http or https scheme. Android Device Policy doesn't provide any credentials in the GET request, so the URL must be publicly accessible. Including a long, random component in the URL may be used to prevent attackers from discovering the URL. @@ -220,6 +223,9 @@

Method Details

"A String", ], "enterpriseDisplayName": "A String", # The name of the enterprise displayed to users. This field has a maximum length of 100 characters. + "googleAuthenticationSettings": { # Contains settings for Google-provided user authentication. # Settings for Google-provided user authentication. + "googleAuthenticationRequired": "A String", # Output only. Whether users need to be authenticated by Google during the enrollment process. IT admin can specify if Google authentication is enabled for the enterprise for knowledge worker devices. This value can be set only via the Google Admin Console. Google authentication can be used with signin_url In the case where Google authentication is required and a signin_url is specified, Google authentication will be launched before signin_url. + }, "logo": { # Data hosted at an external location. The data is to be downloaded by Android Device Policy and verified against the hash. # An image displayed as a logo during device provisioning. Supported types are: image/bmp, image/gif, image/x-ico, image/jpeg, image/png, image/webp, image/vnd.wap.wbmp, image/x-adobe-dng. "sha256Hash": "A String", # The base-64 encoded SHA-256 hash of the content hosted at url. If the content doesn't match this hash, Android Device Policy won't use the data. "url": "A String", # The absolute URL to the data, which must use either the http or https scheme. Android Device Policy doesn't provide any credentials in the GET request, so the URL must be publicly accessible. Including a long, random component in the URL may be used to prevent attackers from discovering the URL. @@ -302,6 +308,9 @@

Method Details

"A String", ], "enterpriseDisplayName": "A String", # The name of the enterprise displayed to users. This field has a maximum length of 100 characters. + "googleAuthenticationSettings": { # Contains settings for Google-provided user authentication. # Settings for Google-provided user authentication. + "googleAuthenticationRequired": "A String", # Output only. Whether users need to be authenticated by Google during the enrollment process. IT admin can specify if Google authentication is enabled for the enterprise for knowledge worker devices. This value can be set only via the Google Admin Console. Google authentication can be used with signin_url In the case where Google authentication is required and a signin_url is specified, Google authentication will be launched before signin_url. + }, "logo": { # Data hosted at an external location. The data is to be downloaded by Android Device Policy and verified against the hash. # An image displayed as a logo during device provisioning. Supported types are: image/bmp, image/gif, image/x-ico, image/jpeg, image/png, image/webp, image/vnd.wap.wbmp, image/x-adobe-dng. "sha256Hash": "A String", # The base-64 encoded SHA-256 hash of the content hosted at url. If the content doesn't match this hash, Android Device Policy won't use the data. "url": "A String", # The absolute URL to the data, which must use either the http or https scheme. Android Device Policy doesn't provide any credentials in the GET request, so the URL must be publicly accessible. Including a long, random component in the URL may be used to prevent attackers from discovering the URL. @@ -374,6 +383,9 @@

Method Details

"A String", ], "enterpriseDisplayName": "A String", # The name of the enterprise displayed to users. This field has a maximum length of 100 characters. + "googleAuthenticationSettings": { # Contains settings for Google-provided user authentication. # Settings for Google-provided user authentication. + "googleAuthenticationRequired": "A String", # Output only. Whether users need to be authenticated by Google during the enrollment process. IT admin can specify if Google authentication is enabled for the enterprise for knowledge worker devices. This value can be set only via the Google Admin Console. Google authentication can be used with signin_url In the case where Google authentication is required and a signin_url is specified, Google authentication will be launched before signin_url. + }, "logo": { # Data hosted at an external location. The data is to be downloaded by Android Device Policy and verified against the hash. # An image displayed as a logo during device provisioning. Supported types are: image/bmp, image/gif, image/x-ico, image/jpeg, image/png, image/webp, image/vnd.wap.wbmp, image/x-adobe-dng. "sha256Hash": "A String", # The base-64 encoded SHA-256 hash of the content hosted at url. If the content doesn't match this hash, Android Device Policy won't use the data. "url": "A String", # The absolute URL to the data, which must use either the http or https scheme. Android Device Policy doesn't provide any credentials in the GET request, so the URL must be publicly accessible. Including a long, random component in the URL may be used to prevent attackers from discovering the URL. @@ -450,6 +462,9 @@

Method Details

"A String", ], "enterpriseDisplayName": "A String", # The name of the enterprise displayed to users. This field has a maximum length of 100 characters. + "googleAuthenticationSettings": { # Contains settings for Google-provided user authentication. # Settings for Google-provided user authentication. + "googleAuthenticationRequired": "A String", # Output only. Whether users need to be authenticated by Google during the enrollment process. IT admin can specify if Google authentication is enabled for the enterprise for knowledge worker devices. This value can be set only via the Google Admin Console. Google authentication can be used with signin_url In the case where Google authentication is required and a signin_url is specified, Google authentication will be launched before signin_url. + }, "logo": { # Data hosted at an external location. The data is to be downloaded by Android Device Policy and verified against the hash. # An image displayed as a logo during device provisioning. Supported types are: image/bmp, image/gif, image/x-ico, image/jpeg, image/png, image/webp, image/vnd.wap.wbmp, image/x-adobe-dng. "sha256Hash": "A String", # The base-64 encoded SHA-256 hash of the content hosted at url. If the content doesn't match this hash, Android Device Policy won't use the data. "url": "A String", # The absolute URL to the data, which must use either the http or https scheme. Android Device Policy doesn't provide any credentials in the GET request, so the URL must be publicly accessible. Including a long, random component in the URL may be used to prevent attackers from discovering the URL. @@ -508,6 +523,9 @@

Method Details

"A String", ], "enterpriseDisplayName": "A String", # The name of the enterprise displayed to users. This field has a maximum length of 100 characters. + "googleAuthenticationSettings": { # Contains settings for Google-provided user authentication. # Settings for Google-provided user authentication. + "googleAuthenticationRequired": "A String", # Output only. Whether users need to be authenticated by Google during the enrollment process. IT admin can specify if Google authentication is enabled for the enterprise for knowledge worker devices. This value can be set only via the Google Admin Console. Google authentication can be used with signin_url In the case where Google authentication is required and a signin_url is specified, Google authentication will be launched before signin_url. + }, "logo": { # Data hosted at an external location. The data is to be downloaded by Android Device Policy and verified against the hash. # An image displayed as a logo during device provisioning. Supported types are: image/bmp, image/gif, image/x-ico, image/jpeg, image/png, image/webp, image/vnd.wap.wbmp, image/x-adobe-dng. "sha256Hash": "A String", # The base-64 encoded SHA-256 hash of the content hosted at url. If the content doesn't match this hash, Android Device Policy won't use the data. "url": "A String", # The absolute URL to the data, which must use either the http or https scheme. Android Device Policy doesn't provide any credentials in the GET request, so the URL must be publicly accessible. Including a long, random component in the URL may be used to prevent attackers from discovering the URL. diff --git a/docs/dyn/androidmanagement_v1.enterprises.policies.html b/docs/dyn/androidmanagement_v1.enterprises.policies.html index d28a511df3f..b98b98a800c 100644 --- a/docs/dyn/androidmanagement_v1.enterprises.policies.html +++ b/docs/dyn/androidmanagement_v1.enterprises.policies.html @@ -201,6 +201,7 @@

Method Details

"policy": "A String", # The policy for granting the permission. }, ], + "userControlSettings": "A String", # Optional. Specifies whether user control is permitted for the app. User control includes user actions like force-stopping and clearing app data. Supported on Android 11 and above. "workProfileWidgets": "A String", # Specifies whether the app installed in the work profile is allowed to add widgets to the home screen. }, ], @@ -620,6 +621,7 @@

Method Details

"policy": "A String", # The policy for granting the permission. }, ], + "userControlSettings": "A String", # Optional. Specifies whether user control is permitted for the app. User control includes user actions like force-stopping and clearing app data. Supported on Android 11 and above. "workProfileWidgets": "A String", # Specifies whether the app installed in the work profile is allowed to add widgets to the home screen. }, ], @@ -1045,6 +1047,7 @@

Method Details

"policy": "A String", # The policy for granting the permission. }, ], + "userControlSettings": "A String", # Optional. Specifies whether user control is permitted for the app. User control includes user actions like force-stopping and clearing app data. Supported on Android 11 and above. "workProfileWidgets": "A String", # Specifies whether the app installed in the work profile is allowed to add widgets to the home screen. }, ], @@ -1453,6 +1456,7 @@

Method Details

"policy": "A String", # The policy for granting the permission. }, ], + "userControlSettings": "A String", # Optional. Specifies whether user control is permitted for the app. User control includes user actions like force-stopping and clearing app data. Supported on Android 11 and above. "workProfileWidgets": "A String", # Specifies whether the app installed in the work profile is allowed to add widgets to the home screen. }, ], diff --git a/docs/dyn/backupdr_v1.projects.locations.managementServers.html b/docs/dyn/backupdr_v1.projects.locations.managementServers.html index 1f2d71a56cd..acdb5d0c7d9 100644 --- a/docs/dyn/backupdr_v1.projects.locations.managementServers.html +++ b/docs/dyn/backupdr_v1.projects.locations.managementServers.html @@ -138,6 +138,8 @@

Method Details

}, ], "oauth2ClientId": "A String", # Output only. The OAuth 2.0 client id is required to make API calls to the BackupDR instance API of this ManagementServer. This is the value that should be provided in the ‘aud’ field of the OIDC ID Token (see openid specification https://openid.net/specs/openid-connect-core-1_0.html#IDToken). + "satisfiesPzi": True or False, # Output only. Reserved for future use. + "satisfiesPzs": True or False, # Output only. Reserved for future use. "state": "A String", # Output only. The ManagementServer state. "type": "A String", # Optional. The type of the ManagementServer resource. "updateTime": "A String", # Output only. The time when the instance was updated. @@ -254,6 +256,8 @@

Method Details

}, ], "oauth2ClientId": "A String", # Output only. The OAuth 2.0 client id is required to make API calls to the BackupDR instance API of this ManagementServer. This is the value that should be provided in the ‘aud’ field of the OIDC ID Token (see openid specification https://openid.net/specs/openid-connect-core-1_0.html#IDToken). + "satisfiesPzi": True or False, # Output only. Reserved for future use. + "satisfiesPzs": True or False, # Output only. Reserved for future use. "state": "A String", # Output only. The ManagementServer state. "type": "A String", # Optional. The type of the ManagementServer resource. "updateTime": "A String", # Output only. The time when the instance was updated. @@ -358,6 +362,8 @@

Method Details

}, ], "oauth2ClientId": "A String", # Output only. The OAuth 2.0 client id is required to make API calls to the BackupDR instance API of this ManagementServer. This is the value that should be provided in the ‘aud’ field of the OIDC ID Token (see openid specification https://openid.net/specs/openid-connect-core-1_0.html#IDToken). + "satisfiesPzi": True or False, # Output only. Reserved for future use. + "satisfiesPzs": True or False, # Output only. Reserved for future use. "state": "A String", # Output only. The ManagementServer state. "type": "A String", # Optional. The type of the ManagementServer resource. "updateTime": "A String", # Output only. The time when the instance was updated. diff --git a/docs/dyn/binaryauthorization_v1.projects.attestors.html b/docs/dyn/binaryauthorization_v1.projects.attestors.html index cae1ab19786..76fd2752913 100644 --- a/docs/dyn/binaryauthorization_v1.projects.attestors.html +++ b/docs/dyn/binaryauthorization_v1.projects.attestors.html @@ -129,7 +129,7 @@

Method Details

"updateTime": "A String", # Output only. Time when the attestor was last updated. "userOwnedGrafeasNote": { # An user owned Grafeas note references a Grafeas Attestation.Authority Note created by the user. # This specifies how an attestation will be read, and how it will be used during policy enforcement. "delegationServiceAccountEmail": "A String", # Output only. This field will contain the service account email address that this attestor will use as the principal when querying Container Analysis. Attestor administrators must grant this service account the IAM role needed to read attestations from the note_reference in Container Analysis (`containeranalysis.notes.occurrences.viewer`). This email address is fixed for the lifetime of the attestor, but callers should not make any other assumptions about the service account email; future versions may use an email based on a different naming pattern. - "noteReference": "A String", # Required. The Grafeas resource name of a Attestation.Authority Note, created by the user, in the format: `projects/*/notes/*`. This field may not be updated. An attestation by this attestor is stored as a Grafeas Attestation.Authority Occurrence that names a container image and that links to this Note. Grafeas is an external dependency. + "noteReference": "A String", # Required. The Grafeas resource name of a Attestation.Authority Note, created by the user, in the format: `projects/[PROJECT_ID]/notes/*`. This field may not be updated. A project ID must be used, not a project number. An attestation by this attestor is stored as a Grafeas Attestation.Authority Occurrence that names a container image and that links to this Note. Grafeas is an external dependency. "publicKeys": [ # Optional. Public keys that verify attestations signed by this attestor. This field may be updated. If this field is non-empty, one of the specified public keys must verify that an attestation was signed by this attestor for the image specified in the admission request. If this field is empty, this attestor always returns that no valid attestations exist. { # An attestor public key that will be used to verify attestations signed by this attestor. "asciiArmoredPgpPublicKey": "A String", # ASCII-armored representation of a PGP public key, as the entire output by the command `gpg --export --armor foo@example.com` (either LF or CRLF line endings). When using this field, `id` should be left blank. The Binary Authorization API handlers will calculate the ID and fill it in automatically. Binary Authorization computes this ID as the OpenPGP RFC4880 V4 fingerprint, represented as upper-case hex. If `id` is provided by the caller, it will be overwritten by the API-calculated ID. @@ -161,7 +161,7 @@

Method Details

"updateTime": "A String", # Output only. Time when the attestor was last updated. "userOwnedGrafeasNote": { # An user owned Grafeas note references a Grafeas Attestation.Authority Note created by the user. # This specifies how an attestation will be read, and how it will be used during policy enforcement. "delegationServiceAccountEmail": "A String", # Output only. This field will contain the service account email address that this attestor will use as the principal when querying Container Analysis. Attestor administrators must grant this service account the IAM role needed to read attestations from the note_reference in Container Analysis (`containeranalysis.notes.occurrences.viewer`). This email address is fixed for the lifetime of the attestor, but callers should not make any other assumptions about the service account email; future versions may use an email based on a different naming pattern. - "noteReference": "A String", # Required. The Grafeas resource name of a Attestation.Authority Note, created by the user, in the format: `projects/*/notes/*`. This field may not be updated. An attestation by this attestor is stored as a Grafeas Attestation.Authority Occurrence that names a container image and that links to this Note. Grafeas is an external dependency. + "noteReference": "A String", # Required. The Grafeas resource name of a Attestation.Authority Note, created by the user, in the format: `projects/[PROJECT_ID]/notes/*`. This field may not be updated. A project ID must be used, not a project number. An attestation by this attestor is stored as a Grafeas Attestation.Authority Occurrence that names a container image and that links to this Note. Grafeas is an external dependency. "publicKeys": [ # Optional. Public keys that verify attestations signed by this attestor. This field may be updated. If this field is non-empty, one of the specified public keys must verify that an attestation was signed by this attestor for the image specified in the admission request. If this field is empty, this attestor always returns that no valid attestations exist. { # An attestor public key that will be used to verify attestations signed by this attestor. "asciiArmoredPgpPublicKey": "A String", # ASCII-armored representation of a PGP public key, as the entire output by the command `gpg --export --armor foo@example.com` (either LF or CRLF line endings). When using this field, `id` should be left blank. The Binary Authorization API handlers will calculate the ID and fill it in automatically. Binary Authorization computes this ID as the OpenPGP RFC4880 V4 fingerprint, represented as upper-case hex. If `id` is provided by the caller, it will be overwritten by the API-calculated ID. @@ -217,7 +217,7 @@

Method Details

"updateTime": "A String", # Output only. Time when the attestor was last updated. "userOwnedGrafeasNote": { # An user owned Grafeas note references a Grafeas Attestation.Authority Note created by the user. # This specifies how an attestation will be read, and how it will be used during policy enforcement. "delegationServiceAccountEmail": "A String", # Output only. This field will contain the service account email address that this attestor will use as the principal when querying Container Analysis. Attestor administrators must grant this service account the IAM role needed to read attestations from the note_reference in Container Analysis (`containeranalysis.notes.occurrences.viewer`). This email address is fixed for the lifetime of the attestor, but callers should not make any other assumptions about the service account email; future versions may use an email based on a different naming pattern. - "noteReference": "A String", # Required. The Grafeas resource name of a Attestation.Authority Note, created by the user, in the format: `projects/*/notes/*`. This field may not be updated. An attestation by this attestor is stored as a Grafeas Attestation.Authority Occurrence that names a container image and that links to this Note. Grafeas is an external dependency. + "noteReference": "A String", # Required. The Grafeas resource name of a Attestation.Authority Note, created by the user, in the format: `projects/[PROJECT_ID]/notes/*`. This field may not be updated. A project ID must be used, not a project number. An attestation by this attestor is stored as a Grafeas Attestation.Authority Occurrence that names a container image and that links to this Note. Grafeas is an external dependency. "publicKeys": [ # Optional. Public keys that verify attestations signed by this attestor. This field may be updated. If this field is non-empty, one of the specified public keys must verify that an attestation was signed by this attestor for the image specified in the admission request. If this field is empty, this attestor always returns that no valid attestations exist. { # An attestor public key that will be used to verify attestations signed by this attestor. "asciiArmoredPgpPublicKey": "A String", # ASCII-armored representation of a PGP public key, as the entire output by the command `gpg --export --armor foo@example.com` (either LF or CRLF line endings). When using this field, `id` should be left blank. The Binary Authorization API handlers will calculate the ID and fill it in automatically. Binary Authorization computes this ID as the OpenPGP RFC4880 V4 fingerprint, represented as upper-case hex. If `id` is provided by the caller, it will be overwritten by the API-calculated ID. @@ -294,7 +294,7 @@

Method Details

"updateTime": "A String", # Output only. Time when the attestor was last updated. "userOwnedGrafeasNote": { # An user owned Grafeas note references a Grafeas Attestation.Authority Note created by the user. # This specifies how an attestation will be read, and how it will be used during policy enforcement. "delegationServiceAccountEmail": "A String", # Output only. This field will contain the service account email address that this attestor will use as the principal when querying Container Analysis. Attestor administrators must grant this service account the IAM role needed to read attestations from the note_reference in Container Analysis (`containeranalysis.notes.occurrences.viewer`). This email address is fixed for the lifetime of the attestor, but callers should not make any other assumptions about the service account email; future versions may use an email based on a different naming pattern. - "noteReference": "A String", # Required. The Grafeas resource name of a Attestation.Authority Note, created by the user, in the format: `projects/*/notes/*`. This field may not be updated. An attestation by this attestor is stored as a Grafeas Attestation.Authority Occurrence that names a container image and that links to this Note. Grafeas is an external dependency. + "noteReference": "A String", # Required. The Grafeas resource name of a Attestation.Authority Note, created by the user, in the format: `projects/[PROJECT_ID]/notes/*`. This field may not be updated. A project ID must be used, not a project number. An attestation by this attestor is stored as a Grafeas Attestation.Authority Occurrence that names a container image and that links to this Note. Grafeas is an external dependency. "publicKeys": [ # Optional. Public keys that verify attestations signed by this attestor. This field may be updated. If this field is non-empty, one of the specified public keys must verify that an attestation was signed by this attestor for the image specified in the admission request. If this field is empty, this attestor always returns that no valid attestations exist. { # An attestor public key that will be used to verify attestations signed by this attestor. "asciiArmoredPgpPublicKey": "A String", # ASCII-armored representation of a PGP public key, as the entire output by the command `gpg --export --armor foo@example.com` (either LF or CRLF line endings). When using this field, `id` should be left blank. The Binary Authorization API handlers will calculate the ID and fill it in automatically. Binary Authorization computes this ID as the OpenPGP RFC4880 V4 fingerprint, represented as upper-case hex. If `id` is provided by the caller, it will be overwritten by the API-calculated ID. @@ -432,7 +432,7 @@

Method Details

"updateTime": "A String", # Output only. Time when the attestor was last updated. "userOwnedGrafeasNote": { # An user owned Grafeas note references a Grafeas Attestation.Authority Note created by the user. # This specifies how an attestation will be read, and how it will be used during policy enforcement. "delegationServiceAccountEmail": "A String", # Output only. This field will contain the service account email address that this attestor will use as the principal when querying Container Analysis. Attestor administrators must grant this service account the IAM role needed to read attestations from the note_reference in Container Analysis (`containeranalysis.notes.occurrences.viewer`). This email address is fixed for the lifetime of the attestor, but callers should not make any other assumptions about the service account email; future versions may use an email based on a different naming pattern. - "noteReference": "A String", # Required. The Grafeas resource name of a Attestation.Authority Note, created by the user, in the format: `projects/*/notes/*`. This field may not be updated. An attestation by this attestor is stored as a Grafeas Attestation.Authority Occurrence that names a container image and that links to this Note. Grafeas is an external dependency. + "noteReference": "A String", # Required. The Grafeas resource name of a Attestation.Authority Note, created by the user, in the format: `projects/[PROJECT_ID]/notes/*`. This field may not be updated. A project ID must be used, not a project number. An attestation by this attestor is stored as a Grafeas Attestation.Authority Occurrence that names a container image and that links to this Note. Grafeas is an external dependency. "publicKeys": [ # Optional. Public keys that verify attestations signed by this attestor. This field may be updated. If this field is non-empty, one of the specified public keys must verify that an attestation was signed by this attestor for the image specified in the admission request. If this field is empty, this attestor always returns that no valid attestations exist. { # An attestor public key that will be used to verify attestations signed by this attestor. "asciiArmoredPgpPublicKey": "A String", # ASCII-armored representation of a PGP public key, as the entire output by the command `gpg --export --armor foo@example.com` (either LF or CRLF line endings). When using this field, `id` should be left blank. The Binary Authorization API handlers will calculate the ID and fill it in automatically. Binary Authorization computes this ID as the OpenPGP RFC4880 V4 fingerprint, represented as upper-case hex. If `id` is provided by the caller, it will be overwritten by the API-calculated ID. @@ -463,7 +463,7 @@

Method Details

"updateTime": "A String", # Output only. Time when the attestor was last updated. "userOwnedGrafeasNote": { # An user owned Grafeas note references a Grafeas Attestation.Authority Note created by the user. # This specifies how an attestation will be read, and how it will be used during policy enforcement. "delegationServiceAccountEmail": "A String", # Output only. This field will contain the service account email address that this attestor will use as the principal when querying Container Analysis. Attestor administrators must grant this service account the IAM role needed to read attestations from the note_reference in Container Analysis (`containeranalysis.notes.occurrences.viewer`). This email address is fixed for the lifetime of the attestor, but callers should not make any other assumptions about the service account email; future versions may use an email based on a different naming pattern. - "noteReference": "A String", # Required. The Grafeas resource name of a Attestation.Authority Note, created by the user, in the format: `projects/*/notes/*`. This field may not be updated. An attestation by this attestor is stored as a Grafeas Attestation.Authority Occurrence that names a container image and that links to this Note. Grafeas is an external dependency. + "noteReference": "A String", # Required. The Grafeas resource name of a Attestation.Authority Note, created by the user, in the format: `projects/[PROJECT_ID]/notes/*`. This field may not be updated. A project ID must be used, not a project number. An attestation by this attestor is stored as a Grafeas Attestation.Authority Occurrence that names a container image and that links to this Note. Grafeas is an external dependency. "publicKeys": [ # Optional. Public keys that verify attestations signed by this attestor. This field may be updated. If this field is non-empty, one of the specified public keys must verify that an attestation was signed by this attestor for the image specified in the admission request. If this field is empty, this attestor always returns that no valid attestations exist. { # An attestor public key that will be used to verify attestations signed by this attestor. "asciiArmoredPgpPublicKey": "A String", # ASCII-armored representation of a PGP public key, as the entire output by the command `gpg --export --armor foo@example.com` (either LF or CRLF line endings). When using this field, `id` should be left blank. The Binary Authorization API handlers will calculate the ID and fill it in automatically. Binary Authorization computes this ID as the OpenPGP RFC4880 V4 fingerprint, represented as upper-case hex. If `id` is provided by the caller, it will be overwritten by the API-calculated ID. diff --git a/docs/dyn/binaryauthorization_v1.projects.platforms.policies.html b/docs/dyn/binaryauthorization_v1.projects.platforms.policies.html index 876cde4d65e..e526d72aafb 100644 --- a/docs/dyn/binaryauthorization_v1.projects.platforms.policies.html +++ b/docs/dyn/binaryauthorization_v1.projects.platforms.policies.html @@ -156,7 +156,7 @@

Method Details

}, }, ], - "containerAnalysisAttestationProjects": [ # Optional. The projects where attestations are stored as Container Analysis Occurrences, in the format `projects/[PROJECT_ID]`. Only one attestation needs to successfully verify an image for this check to pass, so a single verified attestation found in any of `container_analysis_attestation_projects` is sufficient for the check to pass. When fetching Occurrences from Container Analysis, only `AttestationOccurrence` kinds are considered. In the future, additional Occurrence kinds may be added to the query. Maximum number of `container_analysis_attestation_projects` allowed in each `SimpleSigningAttestationCheck` is 10. + "containerAnalysisAttestationProjects": [ # Optional. The projects where attestations are stored as Container Analysis Occurrences, in the format `projects/[PROJECT_ID]`. Only one attestation needs to successfully verify an image for this check to pass, so a single verified attestation found in any of `container_analysis_attestation_projects` is sufficient for the check to pass. A project ID must be used, not a project number. When fetching Occurrences from Container Analysis, only `AttestationOccurrence` kinds are considered. In the future, additional Occurrence kinds may be added to the query. Maximum number of `container_analysis_attestation_projects` allowed in each `SimpleSigningAttestationCheck` is 10. "A String", ], }, @@ -273,7 +273,7 @@

Method Details

}, }, ], - "containerAnalysisAttestationProjects": [ # Optional. The projects where attestations are stored as Container Analysis Occurrences, in the format `projects/[PROJECT_ID]`. Only one attestation needs to successfully verify an image for this check to pass, so a single verified attestation found in any of `container_analysis_attestation_projects` is sufficient for the check to pass. When fetching Occurrences from Container Analysis, only `AttestationOccurrence` kinds are considered. In the future, additional Occurrence kinds may be added to the query. Maximum number of `container_analysis_attestation_projects` allowed in each `SimpleSigningAttestationCheck` is 10. + "containerAnalysisAttestationProjects": [ # Optional. The projects where attestations are stored as Container Analysis Occurrences, in the format `projects/[PROJECT_ID]`. Only one attestation needs to successfully verify an image for this check to pass, so a single verified attestation found in any of `container_analysis_attestation_projects` is sufficient for the check to pass. A project ID must be used, not a project number. When fetching Occurrences from Container Analysis, only `AttestationOccurrence` kinds are considered. In the future, additional Occurrence kinds may be added to the query. Maximum number of `container_analysis_attestation_projects` allowed in each `SimpleSigningAttestationCheck` is 10. "A String", ], }, @@ -414,7 +414,7 @@

Method Details

}, }, ], - "containerAnalysisAttestationProjects": [ # Optional. The projects where attestations are stored as Container Analysis Occurrences, in the format `projects/[PROJECT_ID]`. Only one attestation needs to successfully verify an image for this check to pass, so a single verified attestation found in any of `container_analysis_attestation_projects` is sufficient for the check to pass. When fetching Occurrences from Container Analysis, only `AttestationOccurrence` kinds are considered. In the future, additional Occurrence kinds may be added to the query. Maximum number of `container_analysis_attestation_projects` allowed in each `SimpleSigningAttestationCheck` is 10. + "containerAnalysisAttestationProjects": [ # Optional. The projects where attestations are stored as Container Analysis Occurrences, in the format `projects/[PROJECT_ID]`. Only one attestation needs to successfully verify an image for this check to pass, so a single verified attestation found in any of `container_analysis_attestation_projects` is sufficient for the check to pass. A project ID must be used, not a project number. When fetching Occurrences from Container Analysis, only `AttestationOccurrence` kinds are considered. In the future, additional Occurrence kinds may be added to the query. Maximum number of `container_analysis_attestation_projects` allowed in each `SimpleSigningAttestationCheck` is 10. "A String", ], }, @@ -542,7 +542,7 @@

Method Details

}, }, ], - "containerAnalysisAttestationProjects": [ # Optional. The projects where attestations are stored as Container Analysis Occurrences, in the format `projects/[PROJECT_ID]`. Only one attestation needs to successfully verify an image for this check to pass, so a single verified attestation found in any of `container_analysis_attestation_projects` is sufficient for the check to pass. When fetching Occurrences from Container Analysis, only `AttestationOccurrence` kinds are considered. In the future, additional Occurrence kinds may be added to the query. Maximum number of `container_analysis_attestation_projects` allowed in each `SimpleSigningAttestationCheck` is 10. + "containerAnalysisAttestationProjects": [ # Optional. The projects where attestations are stored as Container Analysis Occurrences, in the format `projects/[PROJECT_ID]`. Only one attestation needs to successfully verify an image for this check to pass, so a single verified attestation found in any of `container_analysis_attestation_projects` is sufficient for the check to pass. A project ID must be used, not a project number. When fetching Occurrences from Container Analysis, only `AttestationOccurrence` kinds are considered. In the future, additional Occurrence kinds may be added to the query. Maximum number of `container_analysis_attestation_projects` allowed in each `SimpleSigningAttestationCheck` is 10. "A String", ], }, @@ -676,7 +676,7 @@

Method Details

}, }, ], - "containerAnalysisAttestationProjects": [ # Optional. The projects where attestations are stored as Container Analysis Occurrences, in the format `projects/[PROJECT_ID]`. Only one attestation needs to successfully verify an image for this check to pass, so a single verified attestation found in any of `container_analysis_attestation_projects` is sufficient for the check to pass. When fetching Occurrences from Container Analysis, only `AttestationOccurrence` kinds are considered. In the future, additional Occurrence kinds may be added to the query. Maximum number of `container_analysis_attestation_projects` allowed in each `SimpleSigningAttestationCheck` is 10. + "containerAnalysisAttestationProjects": [ # Optional. The projects where attestations are stored as Container Analysis Occurrences, in the format `projects/[PROJECT_ID]`. Only one attestation needs to successfully verify an image for this check to pass, so a single verified attestation found in any of `container_analysis_attestation_projects` is sufficient for the check to pass. A project ID must be used, not a project number. When fetching Occurrences from Container Analysis, only `AttestationOccurrence` kinds are considered. In the future, additional Occurrence kinds may be added to the query. Maximum number of `container_analysis_attestation_projects` allowed in each `SimpleSigningAttestationCheck` is 10. "A String", ], }, @@ -792,7 +792,7 @@

Method Details

}, }, ], - "containerAnalysisAttestationProjects": [ # Optional. The projects where attestations are stored as Container Analysis Occurrences, in the format `projects/[PROJECT_ID]`. Only one attestation needs to successfully verify an image for this check to pass, so a single verified attestation found in any of `container_analysis_attestation_projects` is sufficient for the check to pass. When fetching Occurrences from Container Analysis, only `AttestationOccurrence` kinds are considered. In the future, additional Occurrence kinds may be added to the query. Maximum number of `container_analysis_attestation_projects` allowed in each `SimpleSigningAttestationCheck` is 10. + "containerAnalysisAttestationProjects": [ # Optional. The projects where attestations are stored as Container Analysis Occurrences, in the format `projects/[PROJECT_ID]`. Only one attestation needs to successfully verify an image for this check to pass, so a single verified attestation found in any of `container_analysis_attestation_projects` is sufficient for the check to pass. A project ID must be used, not a project number. When fetching Occurrences from Container Analysis, only `AttestationOccurrence` kinds are considered. In the future, additional Occurrence kinds may be added to the query. Maximum number of `container_analysis_attestation_projects` allowed in each `SimpleSigningAttestationCheck` is 10. "A String", ], }, diff --git a/docs/dyn/calendar_v3.events.html b/docs/dyn/calendar_v3.events.html index c94835b0ffb..f0fbbf739b9 100644 --- a/docs/dyn/calendar_v3.events.html +++ b/docs/dyn/calendar_v3.events.html @@ -103,7 +103,7 @@
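All six hunks above add the same clarification to `containerAnalysisAttestationProjects`: entries must be project IDs, not project numbers. A sketch of the relevant fragment of a platform policy body, with hypothetical project IDs; the surrounding check-set structure is omitted:

```python
# Hypothetical fragment of a SimpleSigningAttestationCheck. Entries must
# use project IDs; a project number such as "projects/123456789012"
# would be rejected.
simple_signing_check = {
    "containerAnalysisAttestationProjects": [
        "projects/attestation-project-one",
        "projects/attestation-project-two",
    ],
}

# At most 10 projects are allowed per check, and one verified
# AttestationOccurrence in any listed project is sufficient to pass.
assert len(simple_signing_check["containerAnalysisAttestationProjects"]) <= 10
```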

Instance Methods

Retrieves the next page of results.

move(calendarId, eventId, destination, sendNotifications=None, sendUpdates=None)

- Moves an event to another calendar, i.e. changes an event's organizer. Note that only default events can be moved; outOfOffice, focusTime and workingLocation events cannot be moved.

+ Moves an event to another calendar, i.e. changes an event's organizer. Note that only default events can be moved; outOfOffice, focusTime, workingLocation and fromGmail events cannot be moved.

patch(calendarId, eventId, alwaysIncludeEmail=None, body=None, conferenceDataVersion=None, maxAttendees=None, sendNotifications=None, sendUpdates=None, supportsAttachments=None)

Updates an event. This method supports patch semantics.

@@ -312,6 +312,7 @@

Method Details

# - "outOfOffice" - An out-of-office event. # - "focusTime" - A focus-time event. # - "workingLocation" - A working location event. + # - "fromGmail" - An event from Gmail. This type of event cannot be created. "extendedProperties": { # Extended properties of the event. "private": { # Properties that are private to the copy of the event that appears on this calendar. "a_key": "A String", # The name of the private property and the corresponding value. @@ -607,6 +608,7 @@

Method Details

# - "outOfOffice" - An out-of-office event. # - "focusTime" - A focus-time event. # - "workingLocation" - A working location event. + # - "fromGmail" - An event from Gmail. This type of event cannot be created. "extendedProperties": { # Extended properties of the event. "private": { # Properties that are private to the copy of the event that appears on this calendar. "a_key": "A String", # The name of the private property and the corresponding value. @@ -897,6 +899,7 @@

Method Details

# - "outOfOffice" - An out-of-office event. # - "focusTime" - A focus-time event. # - "workingLocation" - A working location event. + # - "fromGmail" - An event from Gmail. This type of event cannot be created. "extendedProperties": { # Extended properties of the event. "private": { # Properties that are private to the copy of the event that appears on this calendar. "a_key": "A String", # The name of the private property and the corresponding value. @@ -1191,6 +1194,7 @@

Method Details

# - "outOfOffice" - An out-of-office event. # - "focusTime" - A focus-time event. # - "workingLocation" - A working location event. + # - "fromGmail" - An event from Gmail. This type of event cannot be created. "extendedProperties": { # Extended properties of the event. "private": { # Properties that are private to the copy of the event that appears on this calendar. "a_key": "A String", # The name of the private property and the corresponding value. @@ -1490,6 +1494,7 @@

Method Details

# - "outOfOffice" - An out-of-office event. # - "focusTime" - A focus-time event. # - "workingLocation" - A working location event. + # - "fromGmail" - An event from Gmail. This type of event cannot be created. "extendedProperties": { # Extended properties of the event. "private": { # Properties that are private to the copy of the event that appears on this calendar. "a_key": "A String", # The name of the private property and the corresponding value. @@ -1815,6 +1820,7 @@

Method Details

# - "outOfOffice" - An out-of-office event. # - "focusTime" - A focus-time event. # - "workingLocation" - A working location event. + # - "fromGmail" - An event from Gmail. This type of event cannot be created. "extendedProperties": { # Extended properties of the event. "private": { # Properties that are private to the copy of the event that appears on this calendar. "a_key": "A String", # The name of the private property and the corresponding value. @@ -1975,6 +1981,7 @@

Method Details

Allowed values default - Regular events. focusTime - Focus time events. + fromGmail - Events from Gmail. outOfOffice - Out of office events. workingLocation - Working location events. iCalUID: string, Specifies an event ID in the iCalendar format to be provided in the response. Optional. Use this if you want to search for an event by its iCalendar ID. @@ -2204,6 +2211,7 @@
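With `fromGmail` now an accepted `eventTypes` value on `events.list`, a filtered listing might look like the following sketch; default application credentials are assumed and `primary` is the standard alias for the user's main calendar:

```python
from googleapiclient.discovery import build

calendar = build("calendar", "v3")

# eventTypes is a repeated query parameter; a list requests several types.
response = (
    calendar.events()
    .list(calendarId="primary", eventTypes=["default", "fromGmail"])
    .execute()
)
for event in response.get("items", []):
    print(event.get("eventType"), event.get("summary"))
```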

Method Details

# - "outOfOffice" - An out-of-office event. # - "focusTime" - A focus-time event. # - "workingLocation" - A working location event. + # - "fromGmail" - An event from Gmail. This type of event cannot be created. "extendedProperties": { # Extended properties of the event. "private": { # Properties that are private to the copy of the event that appears on this calendar. "a_key": "A String", # The name of the private property and the corresponding value. @@ -2355,7 +2363,7 @@

Method Details

move(calendarId, eventId, destination, sendNotifications=None, sendUpdates=None)
- Moves an event to another calendar, i.e. changes an event's organizer. Note that only default events can be moved; outOfOffice, focusTime and workingLocation events cannot be moved.
+ Moves an event to another calendar, i.e. changes an event's organizer. Note that only default events can be moved; outOfOffice, focusTime, workingLocation and fromGmail events cannot be moved.
 
 Args:
   calendarId: string, Calendar identifier of the source calendar where the event currently is on. (required)
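Since `fromGmail` joins the event types that `events.move` refuses, a defensive call might look like this sketch; the event and calendar IDs are placeholders:

```python
from googleapiclient.discovery import build
from googleapiclient.errors import HttpError

calendar = build("calendar", "v3")
try:
    moved = (
        calendar.events()
        .move(
            calendarId="primary",
            eventId="event-id-to-move",
            destination="team-calendar-id@group.calendar.google.com",
        )
        .execute()
    )
    print("New organizer:", moved["organizer"]["email"])
except HttpError as err:
    # outOfOffice, focusTime, workingLocation and fromGmail events
    # cannot be moved; the API rejects such requests.
    print("Move failed:", err)
```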
@@ -2531,6 +2539,7 @@ 

Method Details

# - "outOfOffice" - An out-of-office event. # - "focusTime" - A focus-time event. # - "workingLocation" - A working location event. + # - "fromGmail" - An event from Gmail. This type of event cannot be created. "extendedProperties": { # Extended properties of the event. "private": { # Properties that are private to the copy of the event that appears on this calendar. "a_key": "A String", # The name of the private property and the corresponding value. @@ -2826,6 +2835,7 @@

Method Details

# - "outOfOffice" - An out-of-office event. # - "focusTime" - A focus-time event. # - "workingLocation" - A working location event. + # - "fromGmail" - An event from Gmail. This type of event cannot be created. "extendedProperties": { # Extended properties of the event. "private": { # Properties that are private to the copy of the event that appears on this calendar. "a_key": "A String", # The name of the private property and the corresponding value. @@ -3126,6 +3136,7 @@

Method Details

# - "outOfOffice" - An out-of-office event. # - "focusTime" - A focus-time event. # - "workingLocation" - A working location event. + # - "fromGmail" - An event from Gmail. This type of event cannot be created. "extendedProperties": { # Extended properties of the event. "private": { # Properties that are private to the copy of the event that appears on this calendar. "a_key": "A String", # The name of the private property and the corresponding value. @@ -3430,6 +3441,7 @@

Method Details

# - "outOfOffice" - An out-of-office event. # - "focusTime" - A focus-time event. # - "workingLocation" - A working location event. + # - "fromGmail" - An event from Gmail. This type of event cannot be created. "extendedProperties": { # Extended properties of the event. "private": { # Properties that are private to the copy of the event that appears on this calendar. "a_key": "A String", # The name of the private property and the corresponding value. @@ -3725,6 +3737,7 @@

Method Details

# - "outOfOffice" - An out-of-office event. # - "focusTime" - A focus-time event. # - "workingLocation" - A working location event. + # - "fromGmail" - An event from Gmail. This type of event cannot be created. "extendedProperties": { # Extended properties of the event. "private": { # Properties that are private to the copy of the event that appears on this calendar. "a_key": "A String", # The name of the private property and the corresponding value. @@ -4025,6 +4038,7 @@

Method Details

# - "outOfOffice" - An out-of-office event. # - "focusTime" - A focus-time event. # - "workingLocation" - A working location event. + # - "fromGmail" - An event from Gmail. This type of event cannot be created. "extendedProperties": { # Extended properties of the event. "private": { # Properties that are private to the copy of the event that appears on this calendar. "a_key": "A String", # The name of the private property and the corresponding value. @@ -4181,6 +4195,7 @@

Method Details

Allowed values default - Regular events. focusTime - Focus time events. + fromGmail - Events from Gmail. outOfOffice - Out of office events. workingLocation - Working location events. iCalUID: string, Specifies an event ID in the iCalendar format to be provided in the response. Optional. Use this if you want to search for an event by its iCalendar ID. diff --git a/docs/dyn/chromemanagement_v1.customers.telemetry.devices.html b/docs/dyn/chromemanagement_v1.customers.telemetry.devices.html index bb5e90d277e..680cdc4f1e5 100644 --- a/docs/dyn/chromemanagement_v1.customers.telemetry.devices.html +++ b/docs/dyn/chromemanagement_v1.customers.telemetry.devices.html @@ -108,6 +108,19 @@

Method Details

An object of the form: { # Telemetry data collected from a managed device. * Granular permission needed: TELEMETRY_API_DEVICE + "appReport": [ # Output only. App reports collected periodically sorted in a decreasing order of report_time. + { # App report. + "reportTime": "A String", # Timestamp when the report was collected. + "usageData": [ # App usage data. + { # App usage data. + "appId": "A String", # App id. + "appInstanceId": "A String", # Application instance id. This will be unique per window/instance. + "appType": "A String", # Type of app. + "runningDuration": "A String", # App foreground running time. + }, + ], + }, + ], "audioStatusReport": [ # Output only. Audio reports collected periodically sorted in a decreasing order of report_time. { # Status data for storage. * This field is telemetry information and this will change over time as the device is utilized. * Data for this field is controlled via policy: [ReportDeviceAudioStatus](https://chromeenterprise.google/policies/#ReportDeviceAudioStatus) * Data Collection Frequency: 10 minutes * Default Data Reporting Frequency: 3 hours - Policy Controlled: Yes * Cache: If the device is offline, the collected data is stored locally, and will be reported when the device is next online: No * Reported for affiliated users only: N/A * Granular permission needed: TELEMETRY_API_AUDIO_REPORT "inputDevice": "A String", # Output only. Active input device's name. @@ -413,6 +426,19 @@

Method Details

{ "devices": [ # Telemetry devices returned in the response. { # Telemetry data collected from a managed device. * Granular permission needed: TELEMETRY_API_DEVICE + "appReport": [ # Output only. App reports collected periodically sorted in a decreasing order of report_time. + { # App report. + "reportTime": "A String", # Timestamp when the report was collected. + "usageData": [ # App usage data. + { # App usage data. + "appId": "A String", # App id. + "appInstanceId": "A String", # Application instance id. This will be unique per window/instance. + "appType": "A String", # Type of app. + "runningDuration": "A String", # App foreground running time. + }, + ], + }, + ], "audioStatusReport": [ # Output only. Audio reports collected periodically sorted in a decreasing order of report_time. { # Status data for storage. * This field is telemetry information and this will change over time as the device is utilized. * Data for this field is controlled via policy: [ReportDeviceAudioStatus](https://chromeenterprise.google/policies/#ReportDeviceAudioStatus) * Data Collection Frequency: 10 minutes * Default Data Reporting Frequency: 3 hours - Policy Controlled: Yes * Cache: If the device is offline, the collected data is stored locally, and will be reported when the device is next online: No * Reported for affiliated users only: N/A * Granular permission needed: TELEMETRY_API_AUDIO_REPORT "inputDevice": "A String", # Output only. Active input device's name. diff --git a/docs/dyn/chromemanagement_v1.customers.telemetry.users.html b/docs/dyn/chromemanagement_v1.customers.telemetry.users.html index 20d1cf0acf6..60ac3d0e83b 100644 --- a/docs/dyn/chromemanagement_v1.customers.telemetry.users.html +++ b/docs/dyn/chromemanagement_v1.customers.telemetry.users.html @@ -113,6 +113,19 @@

Method Details

"orgUnitId": "A String", # Organization unit of the user. "userDevice": [ # Telemetry data collected from a managed user and device. { # Telemetry data collected for a managed user and device. * Granular permission needed: TELEMETRY_API_DEVICE + "appReport": [ # Output only. App reports collected periodically sorted in a decreasing order of report_time. + { # App report. + "reportTime": "A String", # Timestamp when the report was collected. + "usageData": [ # App usage data. + { # App usage data. + "appId": "A String", # App id. + "appInstanceId": "A String", # Application instance id. This will be unique per window/instance. + "appType": "A String", # Type of app. + "runningDuration": "A String", # App foreground running time. + }, + ], + }, + ], "audioStatusReport": [ # Output only. Audio reports collected periodically sorted in a decreasing order of report_time. { # Status data for storage. * This field is telemetry information and this will change over time as the device is utilized. * Data for this field is controlled via policy: [ReportDeviceAudioStatus](https://chromeenterprise.google/policies/#ReportDeviceAudioStatus) * Data Collection Frequency: 10 minutes * Default Data Reporting Frequency: 3 hours - Policy Controlled: Yes * Cache: If the device is offline, the collected data is stored locally, and will be reported when the device is next online: No * Reported for affiliated users only: N/A * Granular permission needed: TELEMETRY_API_AUDIO_REPORT "inputDevice": "A String", # Output only. Active input device's name. @@ -190,6 +203,19 @@

Method Details

"orgUnitId": "A String", # Organization unit of the user. "userDevice": [ # Telemetry data collected from a managed user and device. { # Telemetry data collected for a managed user and device. * Granular permission needed: TELEMETRY_API_DEVICE + "appReport": [ # Output only. App reports collected periodically sorted in a decreasing order of report_time. + { # App report. + "reportTime": "A String", # Timestamp when the report was collected. + "usageData": [ # App usage data. + { # App usage data. + "appId": "A String", # App id. + "appInstanceId": "A String", # Application instance id. This will be unique per window/instance. + "appType": "A String", # Type of app. + "runningDuration": "A String", # App foreground running time. + }, + ], + }, + ], "audioStatusReport": [ # Output only. Audio reports collected periodically sorted in a decreasing order of report_time. { # Status data for storage. * This field is telemetry information and this will change over time as the device is utilized. * Data for this field is controlled via policy: [ReportDeviceAudioStatus](https://chromeenterprise.google/policies/#ReportDeviceAudioStatus) * Data Collection Frequency: 10 minutes * Default Data Reporting Frequency: 3 hours - Policy Controlled: Yes * Cache: If the device is offline, the collected data is stored locally, and will be reported when the device is next online: No * Reported for affiliated users only: N/A * Granular permission needed: TELEMETRY_API_AUDIO_REPORT "inputDevice": "A String", # Output only. Active input device's name. diff --git a/docs/dyn/cloudbuild_v2.projects.locations.connections.html b/docs/dyn/cloudbuild_v2.projects.locations.connections.html index 037f62f3642..57dee575083 100644 --- a/docs/dyn/cloudbuild_v2.projects.locations.connections.html +++ b/docs/dyn/cloudbuild_v2.projects.locations.connections.html @@ -154,7 +154,7 @@

Method Details

"userTokenSecretVersion": "A String", # Required. A SecretManager resource containing the user token that authorizes the Cloud Build connection. Format: `projects/*/secrets/*/versions/*`. "username": "A String", # Output only. The username associated to this token. }, - "hostUri": "A String", # Optional. The URI of the Bitbucket Data Center instance or cluster this connection is for. + "hostUri": "A String", # Required. The URI of the Bitbucket Data Center instance or cluster this connection is for. "readAuthorizerCredential": { # Represents a personal access token that authorized the Connection, and associated metadata. # Required. A http access token with the `REPO_READ` access. "userTokenSecretVersion": "A String", # Required. A SecretManager resource containing the user token that authorizes the Cloud Build connection. Format: `projects/*/secrets/*/versions/*`. "username": "A String", # Output only. The username associated to this token. @@ -367,7 +367,7 @@

Method Details

"userTokenSecretVersion": "A String", # Required. A SecretManager resource containing the user token that authorizes the Cloud Build connection. Format: `projects/*/secrets/*/versions/*`. "username": "A String", # Output only. The username associated to this token. }, - "hostUri": "A String", # Optional. The URI of the Bitbucket Data Center instance or cluster this connection is for. + "hostUri": "A String", # Required. The URI of the Bitbucket Data Center instance or cluster this connection is for. "readAuthorizerCredential": { # Represents a personal access token that authorized the Connection, and associated metadata. # Required. A http access token with the `REPO_READ` access. "userTokenSecretVersion": "A String", # Required. A SecretManager resource containing the user token that authorizes the Cloud Build connection. Format: `projects/*/secrets/*/versions/*`. "username": "A String", # Output only. The username associated to this token. @@ -518,7 +518,7 @@

Method Details

"userTokenSecretVersion": "A String", # Required. A SecretManager resource containing the user token that authorizes the Cloud Build connection. Format: `projects/*/secrets/*/versions/*`. "username": "A String", # Output only. The username associated to this token. }, - "hostUri": "A String", # Optional. The URI of the Bitbucket Data Center instance or cluster this connection is for. + "hostUri": "A String", # Required. The URI of the Bitbucket Data Center instance or cluster this connection is for. "readAuthorizerCredential": { # Represents a personal access token that authorized the Connection, and associated metadata. # Required. A http access token with the `REPO_READ` access. "userTokenSecretVersion": "A String", # Required. A SecretManager resource containing the user token that authorizes the Cloud Build connection. Format: `projects/*/secrets/*/versions/*`. "username": "A String", # Output only. The username associated to this token. @@ -629,7 +629,7 @@

Method Details

"userTokenSecretVersion": "A String", # Required. A SecretManager resource containing the user token that authorizes the Cloud Build connection. Format: `projects/*/secrets/*/versions/*`. "username": "A String", # Output only. The username associated to this token. }, - "hostUri": "A String", # Optional. The URI of the Bitbucket Data Center instance or cluster this connection is for. + "hostUri": "A String", # Required. The URI of the Bitbucket Data Center instance or cluster this connection is for. "readAuthorizerCredential": { # Represents a personal access token that authorized the Connection, and associated metadata. # Required. A http access token with the `REPO_READ` access. "userTokenSecretVersion": "A String", # Required. A SecretManager resource containing the user token that authorizes the Cloud Build connection. Format: `projects/*/secrets/*/versions/*`. "username": "A String", # Output only. The username associated to this token. diff --git a/docs/dyn/cloudfunctions_v1.projects.locations.functions.html b/docs/dyn/cloudfunctions_v1.projects.locations.functions.html index 2e8cc0319ff..5a3b311fea1 100644 --- a/docs/dyn/cloudfunctions_v1.projects.locations.functions.html +++ b/docs/dyn/cloudfunctions_v1.projects.locations.functions.html @@ -197,7 +197,7 @@

Method Details

"name": "A String", # A user-defined name of the function. Function names must be unique globally and match pattern `projects/*/locations/*/functions/*` "network": "A String", # Deprecated: use vpc_connector "onDeployUpdatePolicy": { # Security patches are only applied when a function is redeployed. - "runtimeVersion": "A String", # Output only. contains the runtime version which was used during latest function deployment. + "runtimeVersion": "A String", # Output only. Contains the runtime version which was used during latest function deployment. }, "runtime": "A String", # The runtime in which to run the function. Required when deploying a new function, optional when updating an existing function. For a complete list of possible choices, see the [`gcloud` command reference](https://cloud.google.com/sdk/gcloud/reference/functions/deploy#--runtime). "secretEnvironmentVariables": [ # Secret environment variables configuration. @@ -409,7 +409,7 @@

Method Details

"name": "A String", # A user-defined name of the function. Function names must be unique globally and match pattern `projects/*/locations/*/functions/*` "network": "A String", # Deprecated: use vpc_connector "onDeployUpdatePolicy": { # Security patches are only applied when a function is redeployed. - "runtimeVersion": "A String", # Output only. contains the runtime version which was used during latest function deployment. + "runtimeVersion": "A String", # Output only. Contains the runtime version which was used during latest function deployment. }, "runtime": "A String", # The runtime in which to run the function. Required when deploying a new function, optional when updating an existing function. For a complete list of possible choices, see the [`gcloud` command reference](https://cloud.google.com/sdk/gcloud/reference/functions/deploy#--runtime). "secretEnvironmentVariables": [ # Secret environment variables configuration. @@ -557,7 +557,7 @@

Method Details

"name": "A String", # A user-defined name of the function. Function names must be unique globally and match pattern `projects/*/locations/*/functions/*` "network": "A String", # Deprecated: use vpc_connector "onDeployUpdatePolicy": { # Security patches are only applied when a function is redeployed. - "runtimeVersion": "A String", # Output only. contains the runtime version which was used during latest function deployment. + "runtimeVersion": "A String", # Output only. Contains the runtime version which was used during latest function deployment. }, "runtime": "A String", # The runtime in which to run the function. Required when deploying a new function, optional when updating an existing function. For a complete list of possible choices, see the [`gcloud` command reference](https://cloud.google.com/sdk/gcloud/reference/functions/deploy#--runtime). "secretEnvironmentVariables": [ # Secret environment variables configuration. @@ -668,7 +668,7 @@

Method Details

"name": "A String", # A user-defined name of the function. Function names must be unique globally and match pattern `projects/*/locations/*/functions/*` "network": "A String", # Deprecated: use vpc_connector "onDeployUpdatePolicy": { # Security patches are only applied when a function is redeployed. - "runtimeVersion": "A String", # Output only. contains the runtime version which was used during latest function deployment. + "runtimeVersion": "A String", # Output only. Contains the runtime version which was used during latest function deployment. }, "runtime": "A String", # The runtime in which to run the function. Required when deploying a new function, optional when updating an existing function. For a complete list of possible choices, see the [`gcloud` command reference](https://cloud.google.com/sdk/gcloud/reference/functions/deploy#--runtime). "secretEnvironmentVariables": [ # Secret environment variables configuration. diff --git a/docs/dyn/cloudfunctions_v2.projects.locations.functions.html b/docs/dyn/cloudfunctions_v2.projects.locations.functions.html index 5794af50749..16f4d335eb9 100644 --- a/docs/dyn/cloudfunctions_v2.projects.locations.functions.html +++ b/docs/dyn/cloudfunctions_v2.projects.locations.functions.html @@ -252,6 +252,7 @@

Method Details

"bucket": "A String", # Google Cloud Storage bucket containing the source (see [Bucket Name Requirements](https://cloud.google.com/storage/docs/bucket-naming#requirements)). "generation": "A String", # Google Cloud Storage generation for the object. If the generation is omitted, the latest generation will be used. "object": "A String", # Google Cloud Storage object containing the source. This object must be a gzipped archive file (`.tar.gz`) containing source to build. + "sourceUploadUrl": "A String", # When the specified storage bucket is a 1st gen function uploard url bucket, this field should be set as the generated upload url for 1st gen deployment. }, }, "sourceProvenance": { # Provenance of the source. Ways to find the original source, or verify that some source was used for this build. # Output only. A permanent fixed identifier for source. @@ -268,6 +269,7 @@

Method Details

"bucket": "A String", # Google Cloud Storage bucket containing the source (see [Bucket Name Requirements](https://cloud.google.com/storage/docs/bucket-naming#requirements)). "generation": "A String", # Google Cloud Storage generation for the object. If the generation is omitted, the latest generation will be used. "object": "A String", # Google Cloud Storage object containing the source. This object must be a gzipped archive file (`.tar.gz`) containing source to build. + "sourceUploadUrl": "A String", # When the specified storage bucket is a 1st gen function uploard url bucket, this field should be set as the generated upload url for 1st gen deployment. }, }, "sourceToken": "A String", # An identifier for Firebase function sources. Disclaimer: This field is only supported for Firebase function deployments. @@ -303,6 +305,7 @@

Method Details

"allTrafficOnLatestRevision": True or False, # Whether 100% of traffic is routed to the latest revision. On CreateFunction and UpdateFunction, when set to true, the revision being deployed will serve 100% of traffic, ignoring any traffic split settings, if any. On GetFunction, true will be returned if the latest revision is serving 100% of traffic. "availableCpu": "A String", # The number of CPUs used in a single container instance. Default value is calculated from available memory. Supports the same values as Cloud Run, see https://cloud.google.com/run/docs/reference/rest/v1/Container#resourcerequirements Example: "1" indicates 1 vCPU "availableMemory": "A String", # The amount of memory available for a function. Defaults to 256M. Supported units are k, M, G, Mi, Gi. If no unit is supplied the value is interpreted as bytes. See https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apimachinery/pkg/api/resource/quantity.go a full description. + "binaryAuthorizationPolicy": "A String", # Optional. The binary authorization policy to be checked when deploying the Cloud Run service. "environmentVariables": { # Environment variables that shall be available during function execution. "a_key": "A String", }, @@ -379,6 +382,7 @@

Method Details

"bucket": "A String", # Google Cloud Storage bucket containing the source (see [Bucket Name Requirements](https://cloud.google.com/storage/docs/bucket-naming#requirements)). "generation": "A String", # Google Cloud Storage generation for the object. If the generation is omitted, the latest generation will be used. "object": "A String", # Google Cloud Storage object containing the source. This object must be a gzipped archive file (`.tar.gz`) containing source to build. + "sourceUploadUrl": "A String", # When the specified storage bucket is a 1st gen function uploard url bucket, this field should be set as the generated upload url for 1st gen deployment. }, }, "sourceProvenance": { # Provenance of the source. Ways to find the original source, or verify that some source was used for this build. # Output only. A permanent fixed identifier for source. @@ -395,6 +399,7 @@

Method Details

"bucket": "A String", # Google Cloud Storage bucket containing the source (see [Bucket Name Requirements](https://cloud.google.com/storage/docs/bucket-naming#requirements)). "generation": "A String", # Google Cloud Storage generation for the object. If the generation is omitted, the latest generation will be used. "object": "A String", # Google Cloud Storage object containing the source. This object must be a gzipped archive file (`.tar.gz`) containing source to build. + "sourceUploadUrl": "A String", # When the specified storage bucket is a 1st gen function uploard url bucket, this field should be set as the generated upload url for 1st gen deployment. }, }, "sourceToken": "A String", # An identifier for Firebase function sources. Disclaimer: This field is only supported for Firebase function deployments. @@ -421,6 +426,7 @@

Method Details

"allTrafficOnLatestRevision": True or False, # Whether 100% of traffic is routed to the latest revision. On CreateFunction and UpdateFunction, when set to true, the revision being deployed will serve 100% of traffic, ignoring any traffic split settings, if any. On GetFunction, true will be returned if the latest revision is serving 100% of traffic. "availableCpu": "A String", # The number of CPUs used in a single container instance. Default value is calculated from available memory. Supports the same values as Cloud Run, see https://cloud.google.com/run/docs/reference/rest/v1/Container#resourcerequirements Example: "1" indicates 1 vCPU "availableMemory": "A String", # The amount of memory available for a function. Defaults to 256M. Supported units are k, M, G, Mi, Gi. If no unit is supplied the value is interpreted as bytes. See https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apimachinery/pkg/api/resource/quantity.go a full description. + "binaryAuthorizationPolicy": "A String", # Optional. The binary authorization policy to be checked when deploying the Cloud Run service. "environmentVariables": { # Environment variables that shall be available during function execution. "a_key": "A String", }, @@ -580,6 +586,7 @@

Method Details

"bucket": "A String", # Google Cloud Storage bucket containing the source (see [Bucket Name Requirements](https://cloud.google.com/storage/docs/bucket-naming#requirements)). "generation": "A String", # Google Cloud Storage generation for the object. If the generation is omitted, the latest generation will be used. "object": "A String", # Google Cloud Storage object containing the source. This object must be a gzipped archive file (`.tar.gz`) containing source to build. + "sourceUploadUrl": "A String", # When the specified storage bucket is a 1st gen function uploard url bucket, this field should be set as the generated upload url for 1st gen deployment. }, "uploadUrl": "A String", # The generated Google Cloud Storage signed URL that should be used for a function source code upload. The uploaded file should be a zip archive which contains a function. }
@@ -630,6 +637,7 @@

Method Details

"bucket": "A String", # Google Cloud Storage bucket containing the source (see [Bucket Name Requirements](https://cloud.google.com/storage/docs/bucket-naming#requirements)). "generation": "A String", # Google Cloud Storage generation for the object. If the generation is omitted, the latest generation will be used. "object": "A String", # Google Cloud Storage object containing the source. This object must be a gzipped archive file (`.tar.gz`) containing source to build. + "sourceUploadUrl": "A String", # When the specified storage bucket is a 1st gen function uploard url bucket, this field should be set as the generated upload url for 1st gen deployment. }, }, "sourceProvenance": { # Provenance of the source. Ways to find the original source, or verify that some source was used for this build. # Output only. A permanent fixed identifier for source. @@ -646,6 +654,7 @@

Method Details

"bucket": "A String", # Google Cloud Storage bucket containing the source (see [Bucket Name Requirements](https://cloud.google.com/storage/docs/bucket-naming#requirements)). "generation": "A String", # Google Cloud Storage generation for the object. If the generation is omitted, the latest generation will be used. "object": "A String", # Google Cloud Storage object containing the source. This object must be a gzipped archive file (`.tar.gz`) containing source to build. + "sourceUploadUrl": "A String", # When the specified storage bucket is a 1st gen function uploard url bucket, this field should be set as the generated upload url for 1st gen deployment. }, }, "sourceToken": "A String", # An identifier for Firebase function sources. Disclaimer: This field is only supported for Firebase function deployments. @@ -681,6 +690,7 @@

Method Details

"allTrafficOnLatestRevision": True or False, # Whether 100% of traffic is routed to the latest revision. On CreateFunction and UpdateFunction, when set to true, the revision being deployed will serve 100% of traffic, ignoring any traffic split settings, if any. On GetFunction, true will be returned if the latest revision is serving 100% of traffic. "availableCpu": "A String", # The number of CPUs used in a single container instance. Default value is calculated from available memory. Supports the same values as Cloud Run, see https://cloud.google.com/run/docs/reference/rest/v1/Container#resourcerequirements Example: "1" indicates 1 vCPU "availableMemory": "A String", # The amount of memory available for a function. Defaults to 256M. Supported units are k, M, G, Mi, Gi. If no unit is supplied the value is interpreted as bytes. See https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apimachinery/pkg/api/resource/quantity.go a full description. + "binaryAuthorizationPolicy": "A String", # Optional. The binary authorization policy to be checked when deploying the Cloud Run service. "environmentVariables": { # Environment variables that shall be available during function execution. "a_key": "A String", }, @@ -757,6 +767,7 @@

Method Details

"bucket": "A String", # Google Cloud Storage bucket containing the source (see [Bucket Name Requirements](https://cloud.google.com/storage/docs/bucket-naming#requirements)). "generation": "A String", # Google Cloud Storage generation for the object. If the generation is omitted, the latest generation will be used. "object": "A String", # Google Cloud Storage object containing the source. This object must be a gzipped archive file (`.tar.gz`) containing source to build. + "sourceUploadUrl": "A String", # When the specified storage bucket is a 1st gen function uploard url bucket, this field should be set as the generated upload url for 1st gen deployment. }, }, "sourceProvenance": { # Provenance of the source. Ways to find the original source, or verify that some source was used for this build. # Output only. A permanent fixed identifier for source. @@ -773,6 +784,7 @@

Method Details

"bucket": "A String", # Google Cloud Storage bucket containing the source (see [Bucket Name Requirements](https://cloud.google.com/storage/docs/bucket-naming#requirements)). "generation": "A String", # Google Cloud Storage generation for the object. If the generation is omitted, the latest generation will be used. "object": "A String", # Google Cloud Storage object containing the source. This object must be a gzipped archive file (`.tar.gz`) containing source to build. + "sourceUploadUrl": "A String", # When the specified storage bucket is a 1st gen function uploard url bucket, this field should be set as the generated upload url for 1st gen deployment. }, }, "sourceToken": "A String", # An identifier for Firebase function sources. Disclaimer: This field is only supported for Firebase function deployments. @@ -799,6 +811,7 @@

Method Details

"allTrafficOnLatestRevision": True or False, # Whether 100% of traffic is routed to the latest revision. On CreateFunction and UpdateFunction, when set to true, the revision being deployed will serve 100% of traffic, ignoring any traffic split settings, if any. On GetFunction, true will be returned if the latest revision is serving 100% of traffic. "availableCpu": "A String", # The number of CPUs used in a single container instance. Default value is calculated from available memory. Supports the same values as Cloud Run, see https://cloud.google.com/run/docs/reference/rest/v1/Container#resourcerequirements Example: "1" indicates 1 vCPU "availableMemory": "A String", # The amount of memory available for a function. Defaults to 256M. Supported units are k, M, G, Mi, Gi. If no unit is supplied the value is interpreted as bytes. See https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apimachinery/pkg/api/resource/quantity.go a full description. + "binaryAuthorizationPolicy": "A String", # Optional. The binary authorization policy to be checked when deploying the Cloud Run service. "environmentVariables": { # Environment variables that shall be available during function execution. "a_key": "A String", }, @@ -940,6 +953,7 @@

Method Details

"bucket": "A String", # Google Cloud Storage bucket containing the source (see [Bucket Name Requirements](https://cloud.google.com/storage/docs/bucket-naming#requirements)). "generation": "A String", # Google Cloud Storage generation for the object. If the generation is omitted, the latest generation will be used. "object": "A String", # Google Cloud Storage object containing the source. This object must be a gzipped archive file (`.tar.gz`) containing source to build. + "sourceUploadUrl": "A String", # When the specified storage bucket is a 1st gen function uploard url bucket, this field should be set as the generated upload url for 1st gen deployment. }, }, "sourceProvenance": { # Provenance of the source. Ways to find the original source, or verify that some source was used for this build. # Output only. A permanent fixed identifier for source. @@ -956,6 +970,7 @@

Method Details

"bucket": "A String", # Google Cloud Storage bucket containing the source (see [Bucket Name Requirements](https://cloud.google.com/storage/docs/bucket-naming#requirements)). "generation": "A String", # Google Cloud Storage generation for the object. If the generation is omitted, the latest generation will be used. "object": "A String", # Google Cloud Storage object containing the source. This object must be a gzipped archive file (`.tar.gz`) containing source to build. + "sourceUploadUrl": "A String", # When the specified storage bucket is a 1st gen function uploard url bucket, this field should be set as the generated upload url for 1st gen deployment. }, }, "sourceToken": "A String", # An identifier for Firebase function sources. Disclaimer: This field is only supported for Firebase function deployments. @@ -991,6 +1006,7 @@

Method Details

"allTrafficOnLatestRevision": True or False, # Whether 100% of traffic is routed to the latest revision. On CreateFunction and UpdateFunction, when set to true, the revision being deployed will serve 100% of traffic, ignoring any traffic split settings, if any. On GetFunction, true will be returned if the latest revision is serving 100% of traffic. "availableCpu": "A String", # The number of CPUs used in a single container instance. Default value is calculated from available memory. Supports the same values as Cloud Run, see https://cloud.google.com/run/docs/reference/rest/v1/Container#resourcerequirements Example: "1" indicates 1 vCPU "availableMemory": "A String", # The amount of memory available for a function. Defaults to 256M. Supported units are k, M, G, Mi, Gi. If no unit is supplied the value is interpreted as bytes. See https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apimachinery/pkg/api/resource/quantity.go a full description. + "binaryAuthorizationPolicy": "A String", # Optional. The binary authorization policy to be checked when deploying the Cloud Run service. "environmentVariables": { # Environment variables that shall be available during function execution. "a_key": "A String", }, @@ -1067,6 +1083,7 @@

Method Details

"bucket": "A String", # Google Cloud Storage bucket containing the source (see [Bucket Name Requirements](https://cloud.google.com/storage/docs/bucket-naming#requirements)). "generation": "A String", # Google Cloud Storage generation for the object. If the generation is omitted, the latest generation will be used. "object": "A String", # Google Cloud Storage object containing the source. This object must be a gzipped archive file (`.tar.gz`) containing source to build. + "sourceUploadUrl": "A String", # When the specified storage bucket is a 1st gen function uploard url bucket, this field should be set as the generated upload url for 1st gen deployment. }, }, "sourceProvenance": { # Provenance of the source. Ways to find the original source, or verify that some source was used for this build. # Output only. A permanent fixed identifier for source. @@ -1083,6 +1100,7 @@

Method Details

"bucket": "A String", # Google Cloud Storage bucket containing the source (see [Bucket Name Requirements](https://cloud.google.com/storage/docs/bucket-naming#requirements)). "generation": "A String", # Google Cloud Storage generation for the object. If the generation is omitted, the latest generation will be used. "object": "A String", # Google Cloud Storage object containing the source. This object must be a gzipped archive file (`.tar.gz`) containing source to build. + "sourceUploadUrl": "A String", # When the specified storage bucket is a 1st gen function uploard url bucket, this field should be set as the generated upload url for 1st gen deployment. }, }, "sourceToken": "A String", # An identifier for Firebase function sources. Disclaimer: This field is only supported for Firebase function deployments. @@ -1109,6 +1127,7 @@

Method Details

"allTrafficOnLatestRevision": True or False, # Whether 100% of traffic is routed to the latest revision. On CreateFunction and UpdateFunction, when set to true, the revision being deployed will serve 100% of traffic, ignoring any traffic split settings, if any. On GetFunction, true will be returned if the latest revision is serving 100% of traffic. "availableCpu": "A String", # The number of CPUs used in a single container instance. Default value is calculated from available memory. Supports the same values as Cloud Run, see https://cloud.google.com/run/docs/reference/rest/v1/Container#resourcerequirements Example: "1" indicates 1 vCPU "availableMemory": "A String", # The amount of memory available for a function. Defaults to 256M. Supported units are k, M, G, Mi, Gi. If no unit is supplied the value is interpreted as bytes. See https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apimachinery/pkg/api/resource/quantity.go a full description. + "binaryAuthorizationPolicy": "A String", # Optional. The binary authorization policy to be checked when deploying the Cloud Run service. "environmentVariables": { # Environment variables that shall be available during function execution. "a_key": "A String", }, @@ -1211,6 +1230,7 @@

Method Details

"bucket": "A String", # Google Cloud Storage bucket containing the source (see [Bucket Name Requirements](https://cloud.google.com/storage/docs/bucket-naming#requirements)). "generation": "A String", # Google Cloud Storage generation for the object. If the generation is omitted, the latest generation will be used. "object": "A String", # Google Cloud Storage object containing the source. This object must be a gzipped archive file (`.tar.gz`) containing source to build. + "sourceUploadUrl": "A String", # When the specified storage bucket is a 1st gen function uploard url bucket, this field should be set as the generated upload url for 1st gen deployment. }, }, "sourceProvenance": { # Provenance of the source. Ways to find the original source, or verify that some source was used for this build. # Output only. A permanent fixed identifier for source. @@ -1227,6 +1247,7 @@

Method Details

"bucket": "A String", # Google Cloud Storage bucket containing the source (see [Bucket Name Requirements](https://cloud.google.com/storage/docs/bucket-naming#requirements)). "generation": "A String", # Google Cloud Storage generation for the object. If the generation is omitted, the latest generation will be used. "object": "A String", # Google Cloud Storage object containing the source. This object must be a gzipped archive file (`.tar.gz`) containing source to build. + "sourceUploadUrl": "A String", # When the specified storage bucket is a 1st gen function uploard url bucket, this field should be set as the generated upload url for 1st gen deployment. }, }, "sourceToken": "A String", # An identifier for Firebase function sources. Disclaimer: This field is only supported for Firebase function deployments. @@ -1262,6 +1283,7 @@

Method Details

"allTrafficOnLatestRevision": True or False, # Whether 100% of traffic is routed to the latest revision. On CreateFunction and UpdateFunction, when set to true, the revision being deployed will serve 100% of traffic, ignoring any traffic split settings, if any. On GetFunction, true will be returned if the latest revision is serving 100% of traffic. "availableCpu": "A String", # The number of CPUs used in a single container instance. Default value is calculated from available memory. Supports the same values as Cloud Run, see https://cloud.google.com/run/docs/reference/rest/v1/Container#resourcerequirements Example: "1" indicates 1 vCPU "availableMemory": "A String", # The amount of memory available for a function. Defaults to 256M. Supported units are k, M, G, Mi, Gi. If no unit is supplied the value is interpreted as bytes. See https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apimachinery/pkg/api/resource/quantity.go a full description. + "binaryAuthorizationPolicy": "A String", # Optional. The binary authorization policy to be checked when deploying the Cloud Run service. "environmentVariables": { # Environment variables that shall be available during function execution. "a_key": "A String", }, @@ -1338,6 +1360,7 @@

Method Details

"bucket": "A String", # Google Cloud Storage bucket containing the source (see [Bucket Name Requirements](https://cloud.google.com/storage/docs/bucket-naming#requirements)). "generation": "A String", # Google Cloud Storage generation for the object. If the generation is omitted, the latest generation will be used. "object": "A String", # Google Cloud Storage object containing the source. This object must be a gzipped archive file (`.tar.gz`) containing source to build. + "sourceUploadUrl": "A String", # When the specified storage bucket is a 1st gen function uploard url bucket, this field should be set as the generated upload url for 1st gen deployment. }, }, "sourceProvenance": { # Provenance of the source. Ways to find the original source, or verify that some source was used for this build. # Output only. A permanent fixed identifier for source. @@ -1354,6 +1377,7 @@

Method Details

"bucket": "A String", # Google Cloud Storage bucket containing the source (see [Bucket Name Requirements](https://cloud.google.com/storage/docs/bucket-naming#requirements)). "generation": "A String", # Google Cloud Storage generation for the object. If the generation is omitted, the latest generation will be used. "object": "A String", # Google Cloud Storage object containing the source. This object must be a gzipped archive file (`.tar.gz`) containing source to build. + "sourceUploadUrl": "A String", # When the specified storage bucket is a 1st gen function uploard url bucket, this field should be set as the generated upload url for 1st gen deployment. }, }, "sourceToken": "A String", # An identifier for Firebase function sources. Disclaimer: This field is only supported for Firebase function deployments. @@ -1380,6 +1404,7 @@

Method Details

"allTrafficOnLatestRevision": True or False, # Whether 100% of traffic is routed to the latest revision. On CreateFunction and UpdateFunction, when set to true, the revision being deployed will serve 100% of traffic, ignoring any traffic split settings, if any. On GetFunction, true will be returned if the latest revision is serving 100% of traffic. "availableCpu": "A String", # The number of CPUs used in a single container instance. Default value is calculated from available memory. Supports the same values as Cloud Run, see https://cloud.google.com/run/docs/reference/rest/v1/Container#resourcerequirements Example: "1" indicates 1 vCPU "availableMemory": "A String", # The amount of memory available for a function. Defaults to 256M. Supported units are k, M, G, Mi, Gi. If no unit is supplied the value is interpreted as bytes. See https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apimachinery/pkg/api/resource/quantity.go a full description. + "binaryAuthorizationPolicy": "A String", # Optional. The binary authorization policy to be checked when deploying the Cloud Run service. "environmentVariables": { # Environment variables that shall be available during function execution. "a_key": "A String", }, diff --git a/docs/dyn/cloudfunctions_v2alpha.projects.locations.functions.html b/docs/dyn/cloudfunctions_v2alpha.projects.locations.functions.html index deb195ce3b3..0d9d5ad849d 100644 --- a/docs/dyn/cloudfunctions_v2alpha.projects.locations.functions.html +++ b/docs/dyn/cloudfunctions_v2alpha.projects.locations.functions.html @@ -252,6 +252,7 @@

Method Details

"bucket": "A String", # Google Cloud Storage bucket containing the source (see [Bucket Name Requirements](https://cloud.google.com/storage/docs/bucket-naming#requirements)). "generation": "A String", # Google Cloud Storage generation for the object. If the generation is omitted, the latest generation will be used. "object": "A String", # Google Cloud Storage object containing the source. This object must be a gzipped archive file (`.tar.gz`) containing source to build. + "sourceUploadUrl": "A String", # When the specified storage bucket is a 1st gen function uploard url bucket, this field should be set as the generated upload url for 1st gen deployment. }, }, "sourceProvenance": { # Provenance of the source. Ways to find the original source, or verify that some source was used for this build. # Output only. A permanent fixed identifier for source. @@ -268,6 +269,7 @@

Method Details

"bucket": "A String", # Google Cloud Storage bucket containing the source (see [Bucket Name Requirements](https://cloud.google.com/storage/docs/bucket-naming#requirements)). "generation": "A String", # Google Cloud Storage generation for the object. If the generation is omitted, the latest generation will be used. "object": "A String", # Google Cloud Storage object containing the source. This object must be a gzipped archive file (`.tar.gz`) containing source to build. + "sourceUploadUrl": "A String", # When the specified storage bucket is a 1st gen function uploard url bucket, this field should be set as the generated upload url for 1st gen deployment. }, }, "sourceToken": "A String", # An identifier for Firebase function sources. Disclaimer: This field is only supported for Firebase function deployments. @@ -303,6 +305,7 @@

Method Details

"allTrafficOnLatestRevision": True or False, # Whether 100% of traffic is routed to the latest revision. On CreateFunction and UpdateFunction, when set to true, the revision being deployed will serve 100% of traffic, ignoring any traffic split settings, if any. On GetFunction, true will be returned if the latest revision is serving 100% of traffic. "availableCpu": "A String", # The number of CPUs used in a single container instance. Default value is calculated from available memory. Supports the same values as Cloud Run, see https://cloud.google.com/run/docs/reference/rest/v1/Container#resourcerequirements Example: "1" indicates 1 vCPU "availableMemory": "A String", # The amount of memory available for a function. Defaults to 256M. Supported units are k, M, G, Mi, Gi. If no unit is supplied the value is interpreted as bytes. See https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apimachinery/pkg/api/resource/quantity.go a full description. + "binaryAuthorizationPolicy": "A String", # Optional. The binary authorization policy to be checked when deploying the Cloud Run service. "environmentVariables": { # Environment variables that shall be available during function execution. "a_key": "A String", }, @@ -379,6 +382,7 @@

Method Details

"bucket": "A String", # Google Cloud Storage bucket containing the source (see [Bucket Name Requirements](https://cloud.google.com/storage/docs/bucket-naming#requirements)). "generation": "A String", # Google Cloud Storage generation for the object. If the generation is omitted, the latest generation will be used. "object": "A String", # Google Cloud Storage object containing the source. This object must be a gzipped archive file (`.tar.gz`) containing source to build. + "sourceUploadUrl": "A String", # When the specified storage bucket is a 1st gen function uploard url bucket, this field should be set as the generated upload url for 1st gen deployment. }, }, "sourceProvenance": { # Provenance of the source. Ways to find the original source, or verify that some source was used for this build. # Output only. A permanent fixed identifier for source. @@ -395,6 +399,7 @@

Method Details

"bucket": "A String", # Google Cloud Storage bucket containing the source (see [Bucket Name Requirements](https://cloud.google.com/storage/docs/bucket-naming#requirements)). "generation": "A String", # Google Cloud Storage generation for the object. If the generation is omitted, the latest generation will be used. "object": "A String", # Google Cloud Storage object containing the source. This object must be a gzipped archive file (`.tar.gz`) containing source to build. + "sourceUploadUrl": "A String", # When the specified storage bucket is a 1st gen function uploard url bucket, this field should be set as the generated upload url for 1st gen deployment. }, }, "sourceToken": "A String", # An identifier for Firebase function sources. Disclaimer: This field is only supported for Firebase function deployments. @@ -421,6 +426,7 @@

Method Details

"allTrafficOnLatestRevision": True or False, # Whether 100% of traffic is routed to the latest revision. On CreateFunction and UpdateFunction, when set to true, the revision being deployed will serve 100% of traffic, ignoring any traffic split settings, if any. On GetFunction, true will be returned if the latest revision is serving 100% of traffic. "availableCpu": "A String", # The number of CPUs used in a single container instance. Default value is calculated from available memory. Supports the same values as Cloud Run, see https://cloud.google.com/run/docs/reference/rest/v1/Container#resourcerequirements Example: "1" indicates 1 vCPU "availableMemory": "A String", # The amount of memory available for a function. Defaults to 256M. Supported units are k, M, G, Mi, Gi. If no unit is supplied the value is interpreted as bytes. See https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apimachinery/pkg/api/resource/quantity.go a full description. + "binaryAuthorizationPolicy": "A String", # Optional. The binary authorization policy to be checked when deploying the Cloud Run service. "environmentVariables": { # Environment variables that shall be available during function execution. "a_key": "A String", }, @@ -580,6 +586,7 @@

Method Details

"bucket": "A String", # Google Cloud Storage bucket containing the source (see [Bucket Name Requirements](https://cloud.google.com/storage/docs/bucket-naming#requirements)). "generation": "A String", # Google Cloud Storage generation for the object. If the generation is omitted, the latest generation will be used. "object": "A String", # Google Cloud Storage object containing the source. This object must be a gzipped archive file (`.tar.gz`) containing source to build. + "sourceUploadUrl": "A String", # When the specified storage bucket is a 1st gen function uploard url bucket, this field should be set as the generated upload url for 1st gen deployment. }, "uploadUrl": "A String", # The generated Google Cloud Storage signed URL that should be used for a function source code upload. The uploaded file should be a zip archive which contains a function. }
@@ -630,6 +637,7 @@

Method Details

"bucket": "A String", # Google Cloud Storage bucket containing the source (see [Bucket Name Requirements](https://cloud.google.com/storage/docs/bucket-naming#requirements)). "generation": "A String", # Google Cloud Storage generation for the object. If the generation is omitted, the latest generation will be used. "object": "A String", # Google Cloud Storage object containing the source. This object must be a gzipped archive file (`.tar.gz`) containing source to build. + "sourceUploadUrl": "A String", # When the specified storage bucket is a 1st gen function uploard url bucket, this field should be set as the generated upload url for 1st gen deployment. }, }, "sourceProvenance": { # Provenance of the source. Ways to find the original source, or verify that some source was used for this build. # Output only. A permanent fixed identifier for source. @@ -646,6 +654,7 @@

Method Details

"bucket": "A String", # Google Cloud Storage bucket containing the source (see [Bucket Name Requirements](https://cloud.google.com/storage/docs/bucket-naming#requirements)). "generation": "A String", # Google Cloud Storage generation for the object. If the generation is omitted, the latest generation will be used. "object": "A String", # Google Cloud Storage object containing the source. This object must be a gzipped archive file (`.tar.gz`) containing source to build. + "sourceUploadUrl": "A String", # When the specified storage bucket is a 1st gen function uploard url bucket, this field should be set as the generated upload url for 1st gen deployment. }, }, "sourceToken": "A String", # An identifier for Firebase function sources. Disclaimer: This field is only supported for Firebase function deployments. @@ -681,6 +690,7 @@

Method Details

"allTrafficOnLatestRevision": True or False, # Whether 100% of traffic is routed to the latest revision. On CreateFunction and UpdateFunction, when set to true, the revision being deployed will serve 100% of traffic, ignoring any traffic split settings, if any. On GetFunction, true will be returned if the latest revision is serving 100% of traffic. "availableCpu": "A String", # The number of CPUs used in a single container instance. Default value is calculated from available memory. Supports the same values as Cloud Run, see https://cloud.google.com/run/docs/reference/rest/v1/Container#resourcerequirements Example: "1" indicates 1 vCPU "availableMemory": "A String", # The amount of memory available for a function. Defaults to 256M. Supported units are k, M, G, Mi, Gi. If no unit is supplied the value is interpreted as bytes. See https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apimachinery/pkg/api/resource/quantity.go a full description. + "binaryAuthorizationPolicy": "A String", # Optional. The binary authorization policy to be checked when deploying the Cloud Run service. "environmentVariables": { # Environment variables that shall be available during function execution. "a_key": "A String", }, @@ -757,6 +767,7 @@

Method Details

"bucket": "A String", # Google Cloud Storage bucket containing the source (see [Bucket Name Requirements](https://cloud.google.com/storage/docs/bucket-naming#requirements)). "generation": "A String", # Google Cloud Storage generation for the object. If the generation is omitted, the latest generation will be used. "object": "A String", # Google Cloud Storage object containing the source. This object must be a gzipped archive file (`.tar.gz`) containing source to build. + "sourceUploadUrl": "A String", # When the specified storage bucket is a 1st gen function uploard url bucket, this field should be set as the generated upload url for 1st gen deployment. }, }, "sourceProvenance": { # Provenance of the source. Ways to find the original source, or verify that some source was used for this build. # Output only. A permanent fixed identifier for source. @@ -773,6 +784,7 @@

Method Details

"bucket": "A String", # Google Cloud Storage bucket containing the source (see [Bucket Name Requirements](https://cloud.google.com/storage/docs/bucket-naming#requirements)). "generation": "A String", # Google Cloud Storage generation for the object. If the generation is omitted, the latest generation will be used. "object": "A String", # Google Cloud Storage object containing the source. This object must be a gzipped archive file (`.tar.gz`) containing source to build. + "sourceUploadUrl": "A String", # When the specified storage bucket is a 1st gen function uploard url bucket, this field should be set as the generated upload url for 1st gen deployment. }, }, "sourceToken": "A String", # An identifier for Firebase function sources. Disclaimer: This field is only supported for Firebase function deployments. @@ -799,6 +811,7 @@

Method Details

"allTrafficOnLatestRevision": True or False, # Whether 100% of traffic is routed to the latest revision. On CreateFunction and UpdateFunction, when set to true, the revision being deployed will serve 100% of traffic, ignoring any traffic split settings, if any. On GetFunction, true will be returned if the latest revision is serving 100% of traffic. "availableCpu": "A String", # The number of CPUs used in a single container instance. Default value is calculated from available memory. Supports the same values as Cloud Run, see https://cloud.google.com/run/docs/reference/rest/v1/Container#resourcerequirements Example: "1" indicates 1 vCPU "availableMemory": "A String", # The amount of memory available for a function. Defaults to 256M. Supported units are k, M, G, Mi, Gi. If no unit is supplied the value is interpreted as bytes. See https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apimachinery/pkg/api/resource/quantity.go a full description. + "binaryAuthorizationPolicy": "A String", # Optional. The binary authorization policy to be checked when deploying the Cloud Run service. "environmentVariables": { # Environment variables that shall be available during function execution. "a_key": "A String", }, @@ -940,6 +953,7 @@

Method Details

"bucket": "A String", # Google Cloud Storage bucket containing the source (see [Bucket Name Requirements](https://cloud.google.com/storage/docs/bucket-naming#requirements)). "generation": "A String", # Google Cloud Storage generation for the object. If the generation is omitted, the latest generation will be used. "object": "A String", # Google Cloud Storage object containing the source. This object must be a gzipped archive file (`.tar.gz`) containing source to build. + "sourceUploadUrl": "A String", # When the specified storage bucket is a 1st gen function uploard url bucket, this field should be set as the generated upload url for 1st gen deployment. }, }, "sourceProvenance": { # Provenance of the source. Ways to find the original source, or verify that some source was used for this build. # Output only. A permanent fixed identifier for source. @@ -956,6 +970,7 @@

Method Details

"bucket": "A String", # Google Cloud Storage bucket containing the source (see [Bucket Name Requirements](https://cloud.google.com/storage/docs/bucket-naming#requirements)). "generation": "A String", # Google Cloud Storage generation for the object. If the generation is omitted, the latest generation will be used. "object": "A String", # Google Cloud Storage object containing the source. This object must be a gzipped archive file (`.tar.gz`) containing source to build. + "sourceUploadUrl": "A String", # When the specified storage bucket is a 1st gen function uploard url bucket, this field should be set as the generated upload url for 1st gen deployment. }, }, "sourceToken": "A String", # An identifier for Firebase function sources. Disclaimer: This field is only supported for Firebase function deployments. @@ -991,6 +1006,7 @@

Method Details

"allTrafficOnLatestRevision": True or False, # Whether 100% of traffic is routed to the latest revision. On CreateFunction and UpdateFunction, when set to true, the revision being deployed will serve 100% of traffic, ignoring any traffic split settings, if any. On GetFunction, true will be returned if the latest revision is serving 100% of traffic. "availableCpu": "A String", # The number of CPUs used in a single container instance. Default value is calculated from available memory. Supports the same values as Cloud Run, see https://cloud.google.com/run/docs/reference/rest/v1/Container#resourcerequirements Example: "1" indicates 1 vCPU "availableMemory": "A String", # The amount of memory available for a function. Defaults to 256M. Supported units are k, M, G, Mi, Gi. If no unit is supplied the value is interpreted as bytes. See https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apimachinery/pkg/api/resource/quantity.go a full description. + "binaryAuthorizationPolicy": "A String", # Optional. The binary authorization policy to be checked when deploying the Cloud Run service. "environmentVariables": { # Environment variables that shall be available during function execution. "a_key": "A String", }, @@ -1067,6 +1083,7 @@

Method Details

"bucket": "A String", # Google Cloud Storage bucket containing the source (see [Bucket Name Requirements](https://cloud.google.com/storage/docs/bucket-naming#requirements)). "generation": "A String", # Google Cloud Storage generation for the object. If the generation is omitted, the latest generation will be used. "object": "A String", # Google Cloud Storage object containing the source. This object must be a gzipped archive file (`.tar.gz`) containing source to build. + "sourceUploadUrl": "A String", # When the specified storage bucket is a 1st gen function uploard url bucket, this field should be set as the generated upload url for 1st gen deployment. }, }, "sourceProvenance": { # Provenance of the source. Ways to find the original source, or verify that some source was used for this build. # Output only. A permanent fixed identifier for source. @@ -1083,6 +1100,7 @@

Method Details

"bucket": "A String", # Google Cloud Storage bucket containing the source (see [Bucket Name Requirements](https://cloud.google.com/storage/docs/bucket-naming#requirements)). "generation": "A String", # Google Cloud Storage generation for the object. If the generation is omitted, the latest generation will be used. "object": "A String", # Google Cloud Storage object containing the source. This object must be a gzipped archive file (`.tar.gz`) containing source to build. + "sourceUploadUrl": "A String", # When the specified storage bucket is a 1st gen function uploard url bucket, this field should be set as the generated upload url for 1st gen deployment. }, }, "sourceToken": "A String", # An identifier for Firebase function sources. Disclaimer: This field is only supported for Firebase function deployments. @@ -1109,6 +1127,7 @@

Method Details

"allTrafficOnLatestRevision": True or False, # Whether 100% of traffic is routed to the latest revision. On CreateFunction and UpdateFunction, when set to true, the revision being deployed will serve 100% of traffic, ignoring any traffic split settings, if any. On GetFunction, true will be returned if the latest revision is serving 100% of traffic. "availableCpu": "A String", # The number of CPUs used in a single container instance. Default value is calculated from available memory. Supports the same values as Cloud Run, see https://cloud.google.com/run/docs/reference/rest/v1/Container#resourcerequirements Example: "1" indicates 1 vCPU "availableMemory": "A String", # The amount of memory available for a function. Defaults to 256M. Supported units are k, M, G, Mi, Gi. If no unit is supplied the value is interpreted as bytes. See https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apimachinery/pkg/api/resource/quantity.go a full description. + "binaryAuthorizationPolicy": "A String", # Optional. The binary authorization policy to be checked when deploying the Cloud Run service. "environmentVariables": { # Environment variables that shall be available during function execution. "a_key": "A String", }, @@ -1211,6 +1230,7 @@

Method Details

"bucket": "A String", # Google Cloud Storage bucket containing the source (see [Bucket Name Requirements](https://cloud.google.com/storage/docs/bucket-naming#requirements)). "generation": "A String", # Google Cloud Storage generation for the object. If the generation is omitted, the latest generation will be used. "object": "A String", # Google Cloud Storage object containing the source. This object must be a gzipped archive file (`.tar.gz`) containing source to build. + "sourceUploadUrl": "A String", # When the specified storage bucket is a 1st gen function uploard url bucket, this field should be set as the generated upload url for 1st gen deployment. }, }, "sourceProvenance": { # Provenance of the source. Ways to find the original source, or verify that some source was used for this build. # Output only. A permanent fixed identifier for source. @@ -1227,6 +1247,7 @@

Method Details

"bucket": "A String", # Google Cloud Storage bucket containing the source (see [Bucket Name Requirements](https://cloud.google.com/storage/docs/bucket-naming#requirements)). "generation": "A String", # Google Cloud Storage generation for the object. If the generation is omitted, the latest generation will be used. "object": "A String", # Google Cloud Storage object containing the source. This object must be a gzipped archive file (`.tar.gz`) containing source to build. + "sourceUploadUrl": "A String", # When the specified storage bucket is a 1st gen function uploard url bucket, this field should be set as the generated upload url for 1st gen deployment. }, }, "sourceToken": "A String", # An identifier for Firebase function sources. Disclaimer: This field is only supported for Firebase function deployments. @@ -1262,6 +1283,7 @@

Method Details

"allTrafficOnLatestRevision": True or False, # Whether 100% of traffic is routed to the latest revision. On CreateFunction and UpdateFunction, when set to true, the revision being deployed will serve 100% of traffic, ignoring any traffic split settings, if any. On GetFunction, true will be returned if the latest revision is serving 100% of traffic. "availableCpu": "A String", # The number of CPUs used in a single container instance. Default value is calculated from available memory. Supports the same values as Cloud Run, see https://cloud.google.com/run/docs/reference/rest/v1/Container#resourcerequirements Example: "1" indicates 1 vCPU "availableMemory": "A String", # The amount of memory available for a function. Defaults to 256M. Supported units are k, M, G, Mi, Gi. If no unit is supplied the value is interpreted as bytes. See https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apimachinery/pkg/api/resource/quantity.go a full description. + "binaryAuthorizationPolicy": "A String", # Optional. The binary authorization policy to be checked when deploying the Cloud Run service. "environmentVariables": { # Environment variables that shall be available during function execution. "a_key": "A String", }, @@ -1338,6 +1360,7 @@

Method Details

"bucket": "A String", # Google Cloud Storage bucket containing the source (see [Bucket Name Requirements](https://cloud.google.com/storage/docs/bucket-naming#requirements)). "generation": "A String", # Google Cloud Storage generation for the object. If the generation is omitted, the latest generation will be used. "object": "A String", # Google Cloud Storage object containing the source. This object must be a gzipped archive file (`.tar.gz`) containing source to build. + "sourceUploadUrl": "A String", # When the specified storage bucket is a 1st gen function uploard url bucket, this field should be set as the generated upload url for 1st gen deployment. }, }, "sourceProvenance": { # Provenance of the source. Ways to find the original source, or verify that some source was used for this build. # Output only. A permanent fixed identifier for source. @@ -1354,6 +1377,7 @@

Method Details

"bucket": "A String", # Google Cloud Storage bucket containing the source (see [Bucket Name Requirements](https://cloud.google.com/storage/docs/bucket-naming#requirements)). "generation": "A String", # Google Cloud Storage generation for the object. If the generation is omitted, the latest generation will be used. "object": "A String", # Google Cloud Storage object containing the source. This object must be a gzipped archive file (`.tar.gz`) containing source to build. + "sourceUploadUrl": "A String", # When the specified storage bucket is a 1st gen function uploard url bucket, this field should be set as the generated upload url for 1st gen deployment. }, }, "sourceToken": "A String", # An identifier for Firebase function sources. Disclaimer: This field is only supported for Firebase function deployments. @@ -1380,6 +1404,7 @@

Method Details

"allTrafficOnLatestRevision": True or False, # Whether 100% of traffic is routed to the latest revision. On CreateFunction and UpdateFunction, when set to true, the revision being deployed will serve 100% of traffic, ignoring any traffic split settings, if any. On GetFunction, true will be returned if the latest revision is serving 100% of traffic. "availableCpu": "A String", # The number of CPUs used in a single container instance. Default value is calculated from available memory. Supports the same values as Cloud Run, see https://cloud.google.com/run/docs/reference/rest/v1/Container#resourcerequirements Example: "1" indicates 1 vCPU "availableMemory": "A String", # The amount of memory available for a function. Defaults to 256M. Supported units are k, M, G, Mi, Gi. If no unit is supplied the value is interpreted as bytes. See https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apimachinery/pkg/api/resource/quantity.go a full description. + "binaryAuthorizationPolicy": "A String", # Optional. The binary authorization policy to be checked when deploying the Cloud Run service. "environmentVariables": { # Environment variables that shall be available during function execution. "a_key": "A String", }, diff --git a/docs/dyn/cloudfunctions_v2beta.projects.locations.functions.html b/docs/dyn/cloudfunctions_v2beta.projects.locations.functions.html index ed64b11b3f4..c0fa2fe4342 100644 --- a/docs/dyn/cloudfunctions_v2beta.projects.locations.functions.html +++ b/docs/dyn/cloudfunctions_v2beta.projects.locations.functions.html @@ -252,6 +252,7 @@

Method Details

"bucket": "A String", # Google Cloud Storage bucket containing the source (see [Bucket Name Requirements](https://cloud.google.com/storage/docs/bucket-naming#requirements)). "generation": "A String", # Google Cloud Storage generation for the object. If the generation is omitted, the latest generation will be used. "object": "A String", # Google Cloud Storage object containing the source. This object must be a gzipped archive file (`.tar.gz`) containing source to build. + "sourceUploadUrl": "A String", # When the specified storage bucket is a 1st gen function uploard url bucket, this field should be set as the generated upload url for 1st gen deployment. }, }, "sourceProvenance": { # Provenance of the source. Ways to find the original source, or verify that some source was used for this build. # Output only. A permanent fixed identifier for source. @@ -268,6 +269,7 @@

Method Details

"bucket": "A String", # Google Cloud Storage bucket containing the source (see [Bucket Name Requirements](https://cloud.google.com/storage/docs/bucket-naming#requirements)). "generation": "A String", # Google Cloud Storage generation for the object. If the generation is omitted, the latest generation will be used. "object": "A String", # Google Cloud Storage object containing the source. This object must be a gzipped archive file (`.tar.gz`) containing source to build. + "sourceUploadUrl": "A String", # When the specified storage bucket is a 1st gen function uploard url bucket, this field should be set as the generated upload url for 1st gen deployment. }, }, "sourceToken": "A String", # An identifier for Firebase function sources. Disclaimer: This field is only supported for Firebase function deployments. @@ -303,6 +305,7 @@

Method Details

"allTrafficOnLatestRevision": True or False, # Whether 100% of traffic is routed to the latest revision. On CreateFunction and UpdateFunction, when set to true, the revision being deployed will serve 100% of traffic, ignoring any traffic split settings, if any. On GetFunction, true will be returned if the latest revision is serving 100% of traffic. "availableCpu": "A String", # The number of CPUs used in a single container instance. Default value is calculated from available memory. Supports the same values as Cloud Run, see https://cloud.google.com/run/docs/reference/rest/v1/Container#resourcerequirements Example: "1" indicates 1 vCPU "availableMemory": "A String", # The amount of memory available for a function. Defaults to 256M. Supported units are k, M, G, Mi, Gi. If no unit is supplied the value is interpreted as bytes. See https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apimachinery/pkg/api/resource/quantity.go a full description. + "binaryAuthorizationPolicy": "A String", # Optional. The binary authorization policy to be checked when deploying the Cloud Run service. "environmentVariables": { # Environment variables that shall be available during function execution. "a_key": "A String", }, @@ -379,6 +382,7 @@

Method Details

"bucket": "A String", # Google Cloud Storage bucket containing the source (see [Bucket Name Requirements](https://cloud.google.com/storage/docs/bucket-naming#requirements)). "generation": "A String", # Google Cloud Storage generation for the object. If the generation is omitted, the latest generation will be used. "object": "A String", # Google Cloud Storage object containing the source. This object must be a gzipped archive file (`.tar.gz`) containing source to build. + "sourceUploadUrl": "A String", # When the specified storage bucket is a 1st gen function uploard url bucket, this field should be set as the generated upload url for 1st gen deployment. }, }, "sourceProvenance": { # Provenance of the source. Ways to find the original source, or verify that some source was used for this build. # Output only. A permanent fixed identifier for source. @@ -395,6 +399,7 @@

Method Details

"bucket": "A String", # Google Cloud Storage bucket containing the source (see [Bucket Name Requirements](https://cloud.google.com/storage/docs/bucket-naming#requirements)). "generation": "A String", # Google Cloud Storage generation for the object. If the generation is omitted, the latest generation will be used. "object": "A String", # Google Cloud Storage object containing the source. This object must be a gzipped archive file (`.tar.gz`) containing source to build. + "sourceUploadUrl": "A String", # When the specified storage bucket is a 1st gen function uploard url bucket, this field should be set as the generated upload url for 1st gen deployment. }, }, "sourceToken": "A String", # An identifier for Firebase function sources. Disclaimer: This field is only supported for Firebase function deployments. @@ -421,6 +426,7 @@

Method Details

"allTrafficOnLatestRevision": True or False, # Whether 100% of traffic is routed to the latest revision. On CreateFunction and UpdateFunction, when set to true, the revision being deployed will serve 100% of traffic, ignoring any traffic split settings, if any. On GetFunction, true will be returned if the latest revision is serving 100% of traffic. "availableCpu": "A String", # The number of CPUs used in a single container instance. Default value is calculated from available memory. Supports the same values as Cloud Run, see https://cloud.google.com/run/docs/reference/rest/v1/Container#resourcerequirements Example: "1" indicates 1 vCPU "availableMemory": "A String", # The amount of memory available for a function. Defaults to 256M. Supported units are k, M, G, Mi, Gi. If no unit is supplied the value is interpreted as bytes. See https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apimachinery/pkg/api/resource/quantity.go a full description. + "binaryAuthorizationPolicy": "A String", # Optional. The binary authorization policy to be checked when deploying the Cloud Run service. "environmentVariables": { # Environment variables that shall be available during function execution. "a_key": "A String", }, @@ -580,6 +586,7 @@

Method Details

"bucket": "A String", # Google Cloud Storage bucket containing the source (see [Bucket Name Requirements](https://cloud.google.com/storage/docs/bucket-naming#requirements)). "generation": "A String", # Google Cloud Storage generation for the object. If the generation is omitted, the latest generation will be used. "object": "A String", # Google Cloud Storage object containing the source. This object must be a gzipped archive file (`.tar.gz`) containing source to build. + "sourceUploadUrl": "A String", # When the specified storage bucket is a 1st gen function uploard url bucket, this field should be set as the generated upload url for 1st gen deployment. }, "uploadUrl": "A String", # The generated Google Cloud Storage signed URL that should be used for a function source code upload. The uploaded file should be a zip archive which contains a function. }
@@ -630,6 +637,7 @@

Method Details

"bucket": "A String", # Google Cloud Storage bucket containing the source (see [Bucket Name Requirements](https://cloud.google.com/storage/docs/bucket-naming#requirements)). "generation": "A String", # Google Cloud Storage generation for the object. If the generation is omitted, the latest generation will be used. "object": "A String", # Google Cloud Storage object containing the source. This object must be a gzipped archive file (`.tar.gz`) containing source to build. + "sourceUploadUrl": "A String", # When the specified storage bucket is a 1st gen function uploard url bucket, this field should be set as the generated upload url for 1st gen deployment. }, }, "sourceProvenance": { # Provenance of the source. Ways to find the original source, or verify that some source was used for this build. # Output only. A permanent fixed identifier for source. @@ -646,6 +654,7 @@

Method Details

"bucket": "A String", # Google Cloud Storage bucket containing the source (see [Bucket Name Requirements](https://cloud.google.com/storage/docs/bucket-naming#requirements)). "generation": "A String", # Google Cloud Storage generation for the object. If the generation is omitted, the latest generation will be used. "object": "A String", # Google Cloud Storage object containing the source. This object must be a gzipped archive file (`.tar.gz`) containing source to build. + "sourceUploadUrl": "A String", # When the specified storage bucket is a 1st gen function uploard url bucket, this field should be set as the generated upload url for 1st gen deployment. }, }, "sourceToken": "A String", # An identifier for Firebase function sources. Disclaimer: This field is only supported for Firebase function deployments. @@ -681,6 +690,7 @@

Method Details

"allTrafficOnLatestRevision": True or False, # Whether 100% of traffic is routed to the latest revision. On CreateFunction and UpdateFunction, when set to true, the revision being deployed will serve 100% of traffic, ignoring any traffic split settings, if any. On GetFunction, true will be returned if the latest revision is serving 100% of traffic. "availableCpu": "A String", # The number of CPUs used in a single container instance. Default value is calculated from available memory. Supports the same values as Cloud Run, see https://cloud.google.com/run/docs/reference/rest/v1/Container#resourcerequirements Example: "1" indicates 1 vCPU "availableMemory": "A String", # The amount of memory available for a function. Defaults to 256M. Supported units are k, M, G, Mi, Gi. If no unit is supplied the value is interpreted as bytes. See https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apimachinery/pkg/api/resource/quantity.go a full description. + "binaryAuthorizationPolicy": "A String", # Optional. The binary authorization policy to be checked when deploying the Cloud Run service. "environmentVariables": { # Environment variables that shall be available during function execution. "a_key": "A String", }, @@ -757,6 +767,7 @@

Method Details

"bucket": "A String", # Google Cloud Storage bucket containing the source (see [Bucket Name Requirements](https://cloud.google.com/storage/docs/bucket-naming#requirements)). "generation": "A String", # Google Cloud Storage generation for the object. If the generation is omitted, the latest generation will be used. "object": "A String", # Google Cloud Storage object containing the source. This object must be a gzipped archive file (`.tar.gz`) containing source to build. + "sourceUploadUrl": "A String", # When the specified storage bucket is a 1st gen function uploard url bucket, this field should be set as the generated upload url for 1st gen deployment. }, }, "sourceProvenance": { # Provenance of the source. Ways to find the original source, or verify that some source was used for this build. # Output only. A permanent fixed identifier for source. @@ -773,6 +784,7 @@

Method Details

"bucket": "A String", # Google Cloud Storage bucket containing the source (see [Bucket Name Requirements](https://cloud.google.com/storage/docs/bucket-naming#requirements)). "generation": "A String", # Google Cloud Storage generation for the object. If the generation is omitted, the latest generation will be used. "object": "A String", # Google Cloud Storage object containing the source. This object must be a gzipped archive file (`.tar.gz`) containing source to build. + "sourceUploadUrl": "A String", # When the specified storage bucket is a 1st gen function uploard url bucket, this field should be set as the generated upload url for 1st gen deployment. }, }, "sourceToken": "A String", # An identifier for Firebase function sources. Disclaimer: This field is only supported for Firebase function deployments. @@ -799,6 +811,7 @@

Method Details

"allTrafficOnLatestRevision": True or False, # Whether 100% of traffic is routed to the latest revision. On CreateFunction and UpdateFunction, when set to true, the revision being deployed will serve 100% of traffic, ignoring any traffic split settings, if any. On GetFunction, true will be returned if the latest revision is serving 100% of traffic. "availableCpu": "A String", # The number of CPUs used in a single container instance. Default value is calculated from available memory. Supports the same values as Cloud Run, see https://cloud.google.com/run/docs/reference/rest/v1/Container#resourcerequirements Example: "1" indicates 1 vCPU "availableMemory": "A String", # The amount of memory available for a function. Defaults to 256M. Supported units are k, M, G, Mi, Gi. If no unit is supplied the value is interpreted as bytes. See https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apimachinery/pkg/api/resource/quantity.go a full description. + "binaryAuthorizationPolicy": "A String", # Optional. The binary authorization policy to be checked when deploying the Cloud Run service. "environmentVariables": { # Environment variables that shall be available during function execution. "a_key": "A String", }, @@ -940,6 +953,7 @@

Method Details

"bucket": "A String", # Google Cloud Storage bucket containing the source (see [Bucket Name Requirements](https://cloud.google.com/storage/docs/bucket-naming#requirements)). "generation": "A String", # Google Cloud Storage generation for the object. If the generation is omitted, the latest generation will be used. "object": "A String", # Google Cloud Storage object containing the source. This object must be a gzipped archive file (`.tar.gz`) containing source to build. + "sourceUploadUrl": "A String", # When the specified storage bucket is a 1st gen function uploard url bucket, this field should be set as the generated upload url for 1st gen deployment. }, }, "sourceProvenance": { # Provenance of the source. Ways to find the original source, or verify that some source was used for this build. # Output only. A permanent fixed identifier for source. @@ -956,6 +970,7 @@

Method Details

"bucket": "A String", # Google Cloud Storage bucket containing the source (see [Bucket Name Requirements](https://cloud.google.com/storage/docs/bucket-naming#requirements)). "generation": "A String", # Google Cloud Storage generation for the object. If the generation is omitted, the latest generation will be used. "object": "A String", # Google Cloud Storage object containing the source. This object must be a gzipped archive file (`.tar.gz`) containing source to build. + "sourceUploadUrl": "A String", # When the specified storage bucket is a 1st gen function uploard url bucket, this field should be set as the generated upload url for 1st gen deployment. }, }, "sourceToken": "A String", # An identifier for Firebase function sources. Disclaimer: This field is only supported for Firebase function deployments. @@ -991,6 +1006,7 @@

Method Details

"allTrafficOnLatestRevision": True or False, # Whether 100% of traffic is routed to the latest revision. On CreateFunction and UpdateFunction, when set to true, the revision being deployed will serve 100% of traffic, ignoring any traffic split settings, if any. On GetFunction, true will be returned if the latest revision is serving 100% of traffic. "availableCpu": "A String", # The number of CPUs used in a single container instance. Default value is calculated from available memory. Supports the same values as Cloud Run, see https://cloud.google.com/run/docs/reference/rest/v1/Container#resourcerequirements Example: "1" indicates 1 vCPU "availableMemory": "A String", # The amount of memory available for a function. Defaults to 256M. Supported units are k, M, G, Mi, Gi. If no unit is supplied the value is interpreted as bytes. See https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apimachinery/pkg/api/resource/quantity.go a full description. + "binaryAuthorizationPolicy": "A String", # Optional. The binary authorization policy to be checked when deploying the Cloud Run service. "environmentVariables": { # Environment variables that shall be available during function execution. "a_key": "A String", }, @@ -1067,6 +1083,7 @@

Method Details

"bucket": "A String", # Google Cloud Storage bucket containing the source (see [Bucket Name Requirements](https://cloud.google.com/storage/docs/bucket-naming#requirements)). "generation": "A String", # Google Cloud Storage generation for the object. If the generation is omitted, the latest generation will be used. "object": "A String", # Google Cloud Storage object containing the source. This object must be a gzipped archive file (`.tar.gz`) containing source to build. + "sourceUploadUrl": "A String", # When the specified storage bucket is a 1st gen function uploard url bucket, this field should be set as the generated upload url for 1st gen deployment. }, }, "sourceProvenance": { # Provenance of the source. Ways to find the original source, or verify that some source was used for this build. # Output only. A permanent fixed identifier for source. @@ -1083,6 +1100,7 @@

Method Details

"bucket": "A String", # Google Cloud Storage bucket containing the source (see [Bucket Name Requirements](https://cloud.google.com/storage/docs/bucket-naming#requirements)). "generation": "A String", # Google Cloud Storage generation for the object. If the generation is omitted, the latest generation will be used. "object": "A String", # Google Cloud Storage object containing the source. This object must be a gzipped archive file (`.tar.gz`) containing source to build. + "sourceUploadUrl": "A String", # When the specified storage bucket is a 1st gen function uploard url bucket, this field should be set as the generated upload url for 1st gen deployment. }, }, "sourceToken": "A String", # An identifier for Firebase function sources. Disclaimer: This field is only supported for Firebase function deployments. @@ -1109,6 +1127,7 @@

Method Details

"allTrafficOnLatestRevision": True or False, # Whether 100% of traffic is routed to the latest revision. On CreateFunction and UpdateFunction, when set to true, the revision being deployed will serve 100% of traffic, ignoring any traffic split settings, if any. On GetFunction, true will be returned if the latest revision is serving 100% of traffic. "availableCpu": "A String", # The number of CPUs used in a single container instance. Default value is calculated from available memory. Supports the same values as Cloud Run, see https://cloud.google.com/run/docs/reference/rest/v1/Container#resourcerequirements Example: "1" indicates 1 vCPU "availableMemory": "A String", # The amount of memory available for a function. Defaults to 256M. Supported units are k, M, G, Mi, Gi. If no unit is supplied the value is interpreted as bytes. See https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apimachinery/pkg/api/resource/quantity.go a full description. + "binaryAuthorizationPolicy": "A String", # Optional. The binary authorization policy to be checked when deploying the Cloud Run service. "environmentVariables": { # Environment variables that shall be available during function execution. "a_key": "A String", }, @@ -1211,6 +1230,7 @@

Method Details

"bucket": "A String", # Google Cloud Storage bucket containing the source (see [Bucket Name Requirements](https://cloud.google.com/storage/docs/bucket-naming#requirements)). "generation": "A String", # Google Cloud Storage generation for the object. If the generation is omitted, the latest generation will be used. "object": "A String", # Google Cloud Storage object containing the source. This object must be a gzipped archive file (`.tar.gz`) containing source to build. + "sourceUploadUrl": "A String", # When the specified storage bucket is a 1st gen function uploard url bucket, this field should be set as the generated upload url for 1st gen deployment. }, }, "sourceProvenance": { # Provenance of the source. Ways to find the original source, or verify that some source was used for this build. # Output only. A permanent fixed identifier for source. @@ -1227,6 +1247,7 @@

Method Details

"bucket": "A String", # Google Cloud Storage bucket containing the source (see [Bucket Name Requirements](https://cloud.google.com/storage/docs/bucket-naming#requirements)). "generation": "A String", # Google Cloud Storage generation for the object. If the generation is omitted, the latest generation will be used. "object": "A String", # Google Cloud Storage object containing the source. This object must be a gzipped archive file (`.tar.gz`) containing source to build. + "sourceUploadUrl": "A String", # When the specified storage bucket is a 1st gen function uploard url bucket, this field should be set as the generated upload url for 1st gen deployment. }, }, "sourceToken": "A String", # An identifier for Firebase function sources. Disclaimer: This field is only supported for Firebase function deployments. @@ -1262,6 +1283,7 @@

Method Details

"allTrafficOnLatestRevision": True or False, # Whether 100% of traffic is routed to the latest revision. On CreateFunction and UpdateFunction, when set to true, the revision being deployed will serve 100% of traffic, ignoring any traffic split settings, if any. On GetFunction, true will be returned if the latest revision is serving 100% of traffic. "availableCpu": "A String", # The number of CPUs used in a single container instance. Default value is calculated from available memory. Supports the same values as Cloud Run, see https://cloud.google.com/run/docs/reference/rest/v1/Container#resourcerequirements Example: "1" indicates 1 vCPU "availableMemory": "A String", # The amount of memory available for a function. Defaults to 256M. Supported units are k, M, G, Mi, Gi. If no unit is supplied the value is interpreted as bytes. See https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apimachinery/pkg/api/resource/quantity.go a full description. + "binaryAuthorizationPolicy": "A String", # Optional. The binary authorization policy to be checked when deploying the Cloud Run service. "environmentVariables": { # Environment variables that shall be available during function execution. "a_key": "A String", }, @@ -1338,6 +1360,7 @@

Method Details

"bucket": "A String", # Google Cloud Storage bucket containing the source (see [Bucket Name Requirements](https://cloud.google.com/storage/docs/bucket-naming#requirements)). "generation": "A String", # Google Cloud Storage generation for the object. If the generation is omitted, the latest generation will be used. "object": "A String", # Google Cloud Storage object containing the source. This object must be a gzipped archive file (`.tar.gz`) containing source to build. + "sourceUploadUrl": "A String", # When the specified storage bucket is a 1st gen function uploard url bucket, this field should be set as the generated upload url for 1st gen deployment. }, }, "sourceProvenance": { # Provenance of the source. Ways to find the original source, or verify that some source was used for this build. # Output only. A permanent fixed identifier for source. @@ -1354,6 +1377,7 @@

Method Details

"bucket": "A String", # Google Cloud Storage bucket containing the source (see [Bucket Name Requirements](https://cloud.google.com/storage/docs/bucket-naming#requirements)). "generation": "A String", # Google Cloud Storage generation for the object. If the generation is omitted, the latest generation will be used. "object": "A String", # Google Cloud Storage object containing the source. This object must be a gzipped archive file (`.tar.gz`) containing source to build. + "sourceUploadUrl": "A String", # When the specified storage bucket is a 1st gen function uploard url bucket, this field should be set as the generated upload url for 1st gen deployment. }, }, "sourceToken": "A String", # An identifier for Firebase function sources. Disclaimer: This field is only supported for Firebase function deployments. @@ -1380,6 +1404,7 @@

Method Details

"allTrafficOnLatestRevision": True or False, # Whether 100% of traffic is routed to the latest revision. On CreateFunction and UpdateFunction, when set to true, the revision being deployed will serve 100% of traffic, ignoring any traffic split settings, if any. On GetFunction, true will be returned if the latest revision is serving 100% of traffic. "availableCpu": "A String", # The number of CPUs used in a single container instance. Default value is calculated from available memory. Supports the same values as Cloud Run, see https://cloud.google.com/run/docs/reference/rest/v1/Container#resourcerequirements Example: "1" indicates 1 vCPU "availableMemory": "A String", # The amount of memory available for a function. Defaults to 256M. Supported units are k, M, G, Mi, Gi. If no unit is supplied the value is interpreted as bytes. See https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apimachinery/pkg/api/resource/quantity.go a full description. + "binaryAuthorizationPolicy": "A String", # Optional. The binary authorization policy to be checked when deploying the Cloud Run service. "environmentVariables": { # Environment variables that shall be available during function execution. "a_key": "A String", }, diff --git a/docs/dyn/cloudsearch_v1.query.html b/docs/dyn/cloudsearch_v1.query.html index 30b53f7fc22..1b0674d3e49 100644 --- a/docs/dyn/cloudsearch_v1.query.html +++ b/docs/dyn/cloudsearch_v1.query.html @@ -82,6 +82,9 @@

Instance Methods

close()

Close httplib2 connections.

+

+ debugSearch(body=None, x__xgafv=None)

+

Returns debug information for the Cloud Search Query API's search method. **Note:** This API requires a standard end user account to execute. A service account can't perform Query API requests directly; to use a service account to perform queries, set up [Google Workspace domain-wide delegation of authority](https://developers.google.com/cloud-search/docs/guides/delegation/).

removeActivity(body=None, x__xgafv=None)

Provides functionality to remove logged activity for a user. Currently intended only for Chat first-party (1p) clients. **Note:** This API requires a standard end user account to execute. A service account can't perform Remove Activity requests directly; to use a service account to perform queries, set up [Google Workspace domain-wide delegation of authority](https://developers.google.com/cloud-search/docs/guides/delegation/).

@@ -97,6 +100,407 @@

Method Details

Close httplib2 connections.
+
+ debugSearch(body=None, x__xgafv=None) +
Returns debug information for the Cloud Search Query API's search method. **Note:** This API requires a standard end user account to execute. A service account can't perform Query API requests directly; to use a service account to perform queries, set up [Google Workspace domain-wide delegation of authority](https://developers.google.com/cloud-search/docs/guides/delegation/).
+
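A minimal usage sketch, assuming end-user credentials are already configured (a bare service account can't call this, per the note above) and a hypothetical query and search application ID:

    from googleapiclient.discovery import build

    cloudsearch = build("cloudsearch", "v1")

    request_body = {
        "query": "type:document quarterly report",  # assumed query string
        "pageSize": 10,
        "requestOptions": {
            "searchApplicationId": "searchapplications/default",  # assumed ID
            "timeZone": "America/Los_Angeles",
        },
    }
    debug = cloudsearch.query().debugSearch(body=request_body).execute()

    # The response wraps a regular search response plus serialized debug blobs.
    print(debug["searchResponse"]["debugInfo"]["formattedDebugInfo"])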
+Args:
+  body: object, The request body.
+    The object takes the form of:
+
+{ # The search API request.
+  "contextAttributes": [ # Context attributes for the request which will be used to adjust ranking of search results. The maximum number of elements is 10.
+    { # A named attribute associated with an item which can be used for influencing the ranking of the item based on the context in the request.
+      "name": "A String", # The name of the attribute. It should not be empty. The maximum length is 32 characters. The name must start with a letter and can only contain letters (A-Z, a-z) or numbers (0-9). The name will be normalized (lower-cased) before being matched.
+      "values": [ # Text values of the attribute. The maximum number of elements is 10. The maximum length of an element in the array is 32 characters. The value will be normalized (lower-cased) before being matched.
+        "A String",
+      ],
+    },
+  ],
+  "dataSourceRestrictions": [ # The sources to use for querying. If not specified, all data sources from the current search application are used.
+    { # Restriction on Datasource.
+      "filterOptions": [ # Filter options restricting the results. If multiple filters are present, they are grouped by object type before joining. Filters with the same object type are joined conjunctively, then the resulting expressions are joined disjunctively. The maximum number of elements is 20. NOTE: Suggest API supports only few filters at the moment: "objecttype", "type" and "mimetype". For now, schema specific filters cannot be used to filter suggestions.
+        { # Filter options to be applied on query.
+          "filter": { # A generic way of expressing filters in a query, which supports two approaches: **1. Setting a ValueFilter.** The name must match an operator_name defined in the schema for your data source. **2. Setting a CompositeFilter.** The filters are evaluated using the logical operator. The top-level operators can only be either an AND or a NOT. AND can appear only at the top-most level. OR can appear only under a top-level AND. # Generic filter to restrict the search, such as `lang:en`, `site:xyz`.
+            "compositeFilter": {
+              "logicOperator": "A String", # The logic operator of the sub filter.
+              "subFilters": [ # Sub filters.
+                # Object with schema name: Filter
+              ],
+            },
+            "valueFilter": {
+              "operatorName": "A String", # The `operator_name` applied to the query, such as *price_greater_than*. The filter can work against both types of filters defined in the schema for your data source: 1. `operator_name`, where the query filters results by the property that matches the value. 2. `greater_than_operator_name` or `less_than_operator_name` in your schema. The query filters the results for the property values that are greater than or less than the supplied value in the query.
+              "value": { # Definition of a single value with generic type. # The value to be compared with.
+                "booleanValue": True or False,
+                "dateValue": { # Represents a whole calendar date, for example a date of birth. The time of day and time zone are either specified elsewhere or are not significant. The date is relative to the [Proleptic Gregorian Calendar](https://en.wikipedia.org/wiki/Proleptic_Gregorian_calendar). The date must be a valid calendar date between the year 1 and 9999.
+                  "day": 42, # Day of month. Must be from 1 to 31 and valid for the year and month.
+                  "month": 42, # Month of date. Must be from 1 to 12.
+                  "year": 42, # Year of date. Must be from 1 to 9999.
+                },
+                "doubleValue": 3.14,
+                "integerValue": "A String",
+                "stringValue": "A String",
+                "timestampValue": "A String",
+              },
+            },
+          },
+          "objectType": "A String", # If object_type is set, only objects of that type are returned. This should correspond to the name of the object that was registered within the definition of schema. The maximum length is 256 characters.
+        },
+      ],
+      "source": { # Defines sources for the suggest/search APIs. # The source of restriction.
+        "name": "A String", # Source name for content indexed by the Indexing API.
+        "predefinedSource": "A String", # Predefined content source for Google Apps.
+      },
+    },
+  ],
+  "facetOptions": [
+    { # Specifies operators to return facet results for. There will be one FacetResult for every source_name/object_type/operator_name combination.
+      "integerFacetingOptions": { # Used to specify integer faceting options. # If set, describes integer faceting options for the given integer property. The corresponding integer property in the schema should be marked isFacetable. The number of buckets returned would be minimum of this and num_facet_buckets.
+        "integerBuckets": [ # Buckets for given integer values should be in strictly ascending order. For example, if values supplied are (1,5,10,100), the following facet buckets will be formed {<1, [1,5), [5-10), [10-100), >=100}.
+          "A String",
+        ],
+      },
+      "numFacetBuckets": 42, # Maximum number of facet buckets that should be returned for this facet. Defaults to 10. Maximum value is 100.
+      "objectType": "A String", # If object_type is set, only those objects of that type will be used to compute facets. If empty, then all objects will be used to compute facets.
+      "operatorName": "A String", # The name of the operator chosen for faceting. @see cloudsearch.SchemaPropertyOptions
+      "sourceName": "A String", # Source name to facet on. Format: datasources/{source_id} If empty, all data sources will be used.
+    },
+  ],
+  "pageSize": 42, # Maximum number of search results to return in one page. Valid values are between 1 and 100, inclusive. Default value is 10. Minimum value is 50 when results beyond 2000 are requested.
+  "query": "A String", # The raw query string. See supported search operators in the [Narrow your search with operators](https://support.google.com/cloudsearch/answer/6172299)
+  "queryInterpretationOptions": { # Options to interpret user query. # Options to interpret the user query.
+    "disableNlInterpretation": True or False, # Flag to disable natural language (NL) interpretation of queries. Default is false, Set to true to disable natural language interpretation. NL interpretation only applies to predefined datasources.
+    "disableSupplementalResults": True or False, # Use this flag to disable supplemental results for a query. Supplemental results setting chosen at SearchApplication level will take precedence if set to True.
+    "enableVerbatimMode": True or False, # Enable this flag to turn off all internal optimizations like natural language (NL) interpretation of queries, supplemental result retrieval, and usage of synonyms including custom ones. Nl interpretation will be disabled if either one of the two flags is true.
+  },
+  "requestOptions": { # Shared request options for all RPC methods. # Request options, such as the search application and user timezone.
+    "debugOptions": { # Shared request debug options for all cloudsearch RPC methods. # Debug options of the request
+      "enableDebugging": True or False, # If you are asked by Google to help with debugging, set this field. Otherwise, ignore this field.
+    },
+    "languageCode": "A String", # The BCP-47 language code, such as "en-US" or "sr-Latn". For more information, see http://www.unicode.org/reports/tr35/#Unicode_locale_identifier. For translations. Set this field using the language set in browser or for the page. In the event that the user's language preference is known, set this field to the known user language. When specified, the documents in search results are biased towards the specified language. The Suggest API uses this field as a hint to make better third-party autocomplete predictions.
+    "searchApplicationId": "A String", # The ID generated when you create a search application using the [admin console](https://support.google.com/a/answer/9043922).
+    "timeZone": "A String", # Current user's time zone id, such as "America/Los_Angeles" or "Australia/Sydney". These IDs are defined by [Unicode Common Locale Data Repository (CLDR)](http://cldr.unicode.org/) project, and currently available in the file [timezone.xml](http://unicode.org/repos/cldr/trunk/common/bcp47/timezone.xml). This field is used to correctly interpret date and time queries. If this field is not specified, the default time zone (UTC) is used.
+  },
+  "sortOptions": { # The options for sorting the search results
+    "operatorName": "A String", # The name of the operator corresponding to the field to sort on. The corresponding property must be marked as sortable.
+    "sortOrder": "A String", # Ascending is the default sort order
+  },
+  "start": 42, # Starting index of the results.
+}
+
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Debug Search Response.
+  "gsrRequest": "A String", # Serialized string of GenericSearchRequest.
+  "gsrResponse": "A String", # Serialized string of GenericSearchResponse.
+  "searchResponse": { # The search API response. # Search response.
+    "debugInfo": { # Debugging information about the response. # Debugging information about the response.
+      "formattedDebugInfo": "A String", # General debug info formatted for display.
+    },
+    "errorInfo": { # Error information about the response. # Error information about the response.
+      "errorMessages": [
+        { # Error message per source response.
+          "errorMessage": "A String",
+          "source": { # Defines sources for the suggest/search APIs.
+            "name": "A String", # Source name for content indexed by the Indexing API.
+            "predefinedSource": "A String", # Predefined content source for Google Apps.
+          },
+        },
+      ],
+    },
+    "facetResults": [ # Repeated facet results.
+      { # Source specific facet response
+        "buckets": [ # FacetBuckets for values in response containing at least a single result with the corresponding filter.
+          { # A bucket in a facet is the basic unit of operation. A bucket can comprise either a single value OR a contiguous range of values, depending on the type of the field bucketed. FacetBucket is currently used only for returning the response object.
+            "count": 42, # Number of results that match the bucket value. Counts are only returned for searches when count accuracy is ensured. Cloud Search does not guarantee facet counts for any query and facet counts might be present only intermittently, even for identical queries. Do not build dependencies on facet count existence; instead use facet ount percentages which are always returned.
+            "filter": { # A generic way of expressing filters in a query, which supports two approaches: **1. Setting a ValueFilter.** The name must match an operator_name defined in the schema for your data source. **2. Setting a CompositeFilter.** The filters are evaluated using the logical operator. The top-level operators can only be either an AND or a NOT. AND can appear only at the top-most level. OR can appear only under a top-level AND. # Filter to be passed in the search request if the corresponding bucket is selected.
+              "compositeFilter": {
+                "logicOperator": "A String", # The logic operator of the sub filter.
+                "subFilters": [ # Sub filters.
+                  # Object with schema name: Filter
+                ],
+              },
+              "valueFilter": {
+                "operatorName": "A String", # The `operator_name` applied to the query, such as *price_greater_than*. The filter can work against both types of filters defined in the schema for your data source: 1. `operator_name`, where the query filters results by the property that matches the value. 2. `greater_than_operator_name` or `less_than_operator_name` in your schema. The query filters the results for the property values that are greater than or less than the supplied value in the query.
+                "value": { # Definition of a single value with generic type. # The value to be compared with.
+                  "booleanValue": True or False,
+                  "dateValue": { # Represents a whole calendar date, for example a date of birth. The time of day and time zone are either specified elsewhere or are not significant. The date is relative to the [Proleptic Gregorian Calendar](https://en.wikipedia.org/wiki/Proleptic_Gregorian_calendar). The date must be a valid calendar date between the year 1 and 9999.
+                    "day": 42, # Day of month. Must be from 1 to 31 and valid for the year and month.
+                    "month": 42, # Month of date. Must be from 1 to 12.
+                    "year": 42, # Year of date. Must be from 1 to 9999.
+                  },
+                  "doubleValue": 3.14,
+                  "integerValue": "A String",
+                  "stringValue": "A String",
+                  "timestampValue": "A String",
+                },
+              },
+            },
+            "percentage": 42, # Percent of results that match the bucket value. The returned value is between (0-100], and is rounded down to an integer if fractional. If the value is not explicitly returned, it represents a percentage value that rounds to 0. Percentages are returned for all searches, but are an estimate. Because percentages are always returned, you should render percentages instead of counts.
+            "value": { # Definition of a single value with generic type.
+              "booleanValue": True or False,
+              "dateValue": { # Represents a whole calendar date, for example a date of birth. The time of day and time zone are either specified elsewhere or are not significant. The date is relative to the [Proleptic Gregorian Calendar](https://en.wikipedia.org/wiki/Proleptic_Gregorian_calendar). The date must be a valid calendar date between the year 1 and 9999.
+                "day": 42, # Day of month. Must be from 1 to 31 and valid for the year and month.
+                "month": 42, # Month of date. Must be from 1 to 12.
+                "year": 42, # Year of date. Must be from 1 to 9999.
+              },
+              "doubleValue": 3.14,
+              "integerValue": "A String",
+              "stringValue": "A String",
+              "timestampValue": "A String",
+            },
+          },
+        ],
+        "objectType": "A String", # Object type for which facet results are returned. Can be empty.
+        "operatorName": "A String", # The name of the operator chosen for faceting. @see cloudsearch.SchemaPropertyOptions
+        "sourceName": "A String", # Source name for which facet results are returned. Will not be empty.
+      },
+    ],
+    "hasMoreResults": True or False, # Whether there are more search results matching the query.
+    "queryInterpretation": { # Query interpretation result for user query. Empty if query interpretation is disabled.
+      "interpretationType": "A String",
+      "interpretedQuery": "A String", # The interpretation of the query used in search. For example, queries with natural language intent like "email from john" will be interpreted as "from:john source:mail". This field will not be filled when the reason is NOT_ENOUGH_RESULTS_FOUND_FOR_USER_QUERY.
+      "reason": "A String", # The reason for interpretation of the query. This field will not be UNSPECIFIED if the interpretation type is not NONE.
+    },
+    "resultCountEstimate": "A String", # The estimated result count for this query.
+    "resultCountExact": "A String", # The exact result count for this query.
+    "resultCounts": { # Result count information # Expanded result count information.
+      "sourceResultCounts": [ # Result count information for each source with results.
+        { # Per source result count information.
+          "hasMoreResults": True or False, # Whether there are more search results for this source.
+          "resultCountEstimate": "A String", # The estimated result count for this source.
+          "resultCountExact": "A String", # The exact result count for this source.
+          "source": { # Defines sources for the suggest/search APIs. # The source the result count information is associated with.
+            "name": "A String", # Source name for content indexed by the Indexing API.
+            "predefinedSource": "A String", # Predefined content source for Google Apps.
+          },
+        },
+      ],
+    },
+    "results": [ # Results from a search query.
+      { # Results containing indexed information for a document.
+        "clusteredResults": [ # If source is clustered, provide list of clustered results. There will only be one level of clustered results. If current source is not enabled for clustering, this field will be empty.
+          # Object with schema name: SearchResult
+        ],
+        "debugInfo": { # Debugging information about the result. # Debugging information about this search result.
+          "formattedDebugInfo": "A String", # General debug info formatted for display.
+        },
+        "metadata": { # Metadata of a matched search result. # Metadata of the search result.
+          "createTime": "A String", # The creation time for this document or object in the search result.
+          "displayOptions": { # Options that specify how to display a structured data search result.
+            "metalines": [ # The metalines content to be displayed with the result.
+              { # The collection of fields that make up a displayed line
+                "fields": [
+                  { # Display Fields for Search Results
+                    "label": "A String", # The display label for the property.
+                    "operatorName": "A String", # The operator name of the property.
+                    "property": { # A typed name-value pair for structured data. The type of the value should be the same as the registered type for the `name` property in the object definition of `objectType`. # The name value pair for the property.
+                      "booleanValue": True or False,
+                      "dateValues": { # List of date values.
+                        "values": [
+                          { # Represents a whole calendar date, for example a date of birth. The time of day and time zone are either specified elsewhere or are not significant. The date is relative to the [Proleptic Gregorian Calendar](https://en.wikipedia.org/wiki/Proleptic_Gregorian_calendar). The date must be a valid calendar date between the year 1 and 9999.
+                            "day": 42, # Day of month. Must be from 1 to 31 and valid for the year and month.
+                            "month": 42, # Month of date. Must be from 1 to 12.
+                            "year": 42, # Year of date. Must be from 1 to 9999.
+                          },
+                        ],
+                      },
+                      "doubleValues": { # List of double values.
+                        "values": [
+                          3.14,
+                        ],
+                      },
+                      "enumValues": { # List of enum values.
+                        "values": [ # The maximum allowable length for string values is 32 characters.
+                          "A String",
+                        ],
+                      },
+                      "htmlValues": { # List of html values.
+                        "values": [ # The maximum allowable length for html values is 2048 characters.
+                          "A String",
+                        ],
+                      },
+                      "integerValues": { # List of integer values.
+                        "values": [
+                          "A String",
+                        ],
+                      },
+                      "name": "A String", # The name of the property. This name should correspond to the name of the property that was registered for object definition in the schema. The maximum allowable length for this property is 256 characters.
+                      "objectValues": { # List of object values.
+                        "values": [
+                          # Object with schema name: StructuredDataObject
+                        ],
+                      },
+                      "textValues": { # List of text values.
+                        "values": [ # The maximum allowable length for text values is 2048 characters.
+                          "A String",
+                        ],
+                      },
+                      "timestampValues": { # List of timestamp values.
+                        "values": [
+                          "A String",
+                        ],
+                      },
+                    },
+                  },
+                ],
+              },
+            ],
+            "objectTypeLabel": "A String", # The display label for the object.
+          },
+          "fields": [ # Indexed fields in structured data, returned as a generic named property.
+            { # A typed name-value pair for structured data. The type of the value should be the same as the registered type for the `name` property in the object definition of `objectType`.
+              "booleanValue": True or False,
+              "dateValues": { # List of date values.
+                "values": [
+                  { # Represents a whole calendar date, for example a date of birth. The time of day and time zone are either specified elsewhere or are not significant. The date is relative to the [Proleptic Gregorian Calendar](https://en.wikipedia.org/wiki/Proleptic_Gregorian_calendar). The date must be a valid calendar date between the year 1 and 9999.
+                    "day": 42, # Day of month. Must be from 1 to 31 and valid for the year and month.
+                    "month": 42, # Month of date. Must be from 1 to 12.
+                    "year": 42, # Year of date. Must be from 1 to 9999.
+                  },
+                ],
+              },
+              "doubleValues": { # List of double values.
+                "values": [
+                  3.14,
+                ],
+              },
+              "enumValues": { # List of enum values.
+                "values": [ # The maximum allowable length for string values is 32 characters.
+                  "A String",
+                ],
+              },
+              "htmlValues": { # List of html values.
+                "values": [ # The maximum allowable length for html values is 2048 characters.
+                  "A String",
+                ],
+              },
+              "integerValues": { # List of integer values.
+                "values": [
+                  "A String",
+                ],
+              },
+              "name": "A String", # The name of the property. This name should correspond to the name of the property that was registered for object definition in the schema. The maximum allowable length for this property is 256 characters.
+              "objectValues": { # List of object values.
+                "values": [
+                  # Object with schema name: StructuredDataObject
+                ],
+              },
+              "textValues": { # List of text values.
+                "values": [ # The maximum allowable length for text values is 2048 characters.
+                  "A String",
+                ],
+              },
+              "timestampValues": { # List of timestamp values.
+                "values": [
+                  "A String",
+                ],
+              },
+            },
+          ],
+          "mimeType": "A String", # Mime type of the search result.
+          "objectType": "A String", # Object type of the search result.
+          "owner": { # Object to represent a person. # Owner (usually creator) of the document or object of the search result.
+            "emailAddresses": [ # The person's email addresses
+              { # A person's email address.
+                "customType": "A String", # If the value of type is custom, this property contains the custom type string.
+                "emailAddress": "A String", # The email address.
+                "emailUrl": "A String", # The URL to send email.
+                "primary": True or False, # Indicates if this is the user's primary email. Only one entry can be marked as primary.
+                "type": "A String", # The type of the email account. Acceptable values are: "custom", "home", "other", "work".
+              },
+            ],
+            "name": "A String", # The resource name of the person to provide information about. See [`People.get`](https://developers.google.com/people/api/rest/v1/people/get) from the Google People API.
+            "obfuscatedId": "A String", # Obfuscated ID of a person.
+            "personNames": [ # The person's name
+              { # A person's name.
+                "displayName": "A String", # The read-only display name formatted according to the locale specified by the viewer's account or the `Accept-Language` HTTP header.
+              },
+            ],
+            "phoneNumbers": [ # The person's phone numbers
+              { # A person's Phone Number
+                "phoneNumber": "A String", # The phone number of the person.
+                "type": "A String",
+              },
+            ],
+            "photos": [ # A person's read-only photo. A picture shown next to the person's name to help others recognize the person in search results.
+              { # A person's photo.
+                "url": "A String", # The URL of the photo.
+              },
+            ],
+          },
+          "source": { # Defines sources for the suggest/search APIs. # The named source for the result, such as Gmail.
+            "name": "A String", # Source name for content indexed by the Indexing API.
+            "predefinedSource": "A String", # Predefined content source for Google Apps.
+          },
+          "thumbnailUrl": "A String", # The thumbnail URL of the result.
+          "updateTime": "A String", # The last modified date for the object in the search result. If not set in the item, the value returned here is empty. When `updateTime` is used for calculating freshness and is not set, this value defaults to 2 years from the current time.
+        },
+        "snippet": { # Snippet of the search result, which summarizes the content of the resulting page. # The concatenation of all snippets (summaries) available for this result.
+          "matchRanges": [ # The matched ranges in the snippet.
+            { # Matched range of a snippet [start, end).
+              "end": 42, # End of the match in the snippet.
+              "start": 42, # Starting position of the match in the snippet.
+            },
+          ],
+          "snippet": "A String", # The snippet of the document. May contain escaped HTML character that should be unescaped prior to rendering.
+        },
+        "title": "A String", # Title of the search result.
+        "url": "A String", # The URL of the search result. The URL contains a Google redirect to the actual item. This URL is signed and shouldn't be changed.
+      },
+    ],
+    "spellResults": [ # Suggested spelling for the query.
+      {
+        "suggestedQuery": "A String", # The suggested spelling of the query.
+        "suggestedQueryHtml": { # IMPORTANT: It is unsafe to accept this message from an untrusted source, since it's trivial for an attacker to forge serialized messages that don't fulfill the type's safety contract -- for example, it could contain attacker controlled script. A system which receives a SafeHtmlProto implicitly trusts the producer of the SafeHtmlProto. So, it's generally safe to return this message in RPC responses, but generally unsafe to accept it in RPC requests. # The sanitized HTML representing the spell corrected query that can be used in the UI. This usually has language-specific tags to mark up parts of the query that are spell checked.
+          "privateDoNotAccessOrElseSafeHtmlWrappedValue": "A String", # IMPORTANT: Never set or read this field, even from tests, it is private. See documentation at the top of .proto file for programming language packages with which to create or read this message.
+        },
+        "suggestionType": "A String", # Suggestion triggered for the current query.
+      },
+    ],
+    "structuredResults": [ # Structured results for the user query. These results are not counted against the page_size.
+      { # Structured results that are returned as part of search request.
+        "person": { # Object to represent a person. # Representation of a person
+          "emailAddresses": [ # The person's email addresses
+            { # A person's email address.
+              "customType": "A String", # If the value of type is custom, this property contains the custom type string.
+              "emailAddress": "A String", # The email address.
+              "emailUrl": "A String", # The URL to send email.
+              "primary": True or False, # Indicates if this is the user's primary email. Only one entry can be marked as primary.
+              "type": "A String", # The type of the email account. Acceptable values are: "custom", "home", "other", "work".
+            },
+          ],
+          "name": "A String", # The resource name of the person to provide information about. See [`People.get`](https://developers.google.com/people/api/rest/v1/people/get) from the Google People API.
+          "obfuscatedId": "A String", # Obfuscated ID of a person.
+          "personNames": [ # The person's name
+            { # A person's name.
+              "displayName": "A String", # The read-only display name formatted according to the locale specified by the viewer's account or the `Accept-Language` HTTP header.
+            },
+          ],
+          "phoneNumbers": [ # The person's phone numbers
+            { # A person's Phone Number
+              "phoneNumber": "A String", # The phone number of the person.
+              "type": "A String",
+            },
+          ],
+          "photos": [ # A person's read-only photo. A picture shown next to the person's name to help others recognize the person in search results.
+            { # A person's photo.
+              "url": "A String", # The URL of the photo.
+            },
+          ],
+        },
+      },
+    ],
+  },
+}
+
+
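To make the request shape above concrete, here is a minimal sketch of issuing such a query with google-api-python-client. It uses the query.search method, which accepts this same body; the data source name, search application ID, operator name, and `creds` (a delegated end-user credential, see removeActivity below) are illustrative placeholders, not values from this patch.

from googleapiclient.discovery import build

# Build the Cloud Search client with an end-user credential; service
# accounts cannot call the query methods directly.
service = build("cloudsearch", "v1", credentials=creds)

request_body = {
    "query": "quarterly report",  # raw query string, supports search operators
    "pageSize": 10,
    "requestOptions": {
        "searchApplicationId": "searchapplications/example-app",  # placeholder
        "timeZone": "America/Los_Angeles",
    },
    "facetOptions": [
        {
            "sourceName": "datasources/example-source",  # placeholder
            "operatorName": "department",  # placeholder operator name
            "numFacetBuckets": 10,
        },
    ],
}

response = service.query().search(body=request_body).execute()
for result in response.get("results", []):
    print(result["title"], result["url"])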
removeActivity(body=None, x__xgafv=None)
Provides functionality to remove logged activity for a user. Currently to be used only for Chat 1p clients. **Note:** This API requires a standard end user account to execute. A service account can't perform Remove Activity requests directly; to use a service account to perform queries, set up [Google Workspace domain-wide delegation of authority](https://developers.google.com/cloud-search/docs/guides/delegation/).
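A sketch of the domain-wide delegation setup the note above requires, assuming the google-auth library; the key file path and impersonated user are placeholders.

from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/cloud_search"]

# The service account impersonates a regular end-user account through
# domain-wide delegation; the impersonated user is named via `subject`.
delegated_creds = service_account.Credentials.from_service_account_file(
    "service-account.json",          # placeholder key file
    scopes=SCOPES,
    subject="end.user@example.com",  # placeholder user to impersonate
)
service = build("cloudsearch", "v1", credentials=delegated_creds)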
diff --git a/docs/dyn/compute_alpha.networks.html b/docs/dyn/compute_alpha.networks.html
index 6f54ae6cf88..e773ccef53c 100644
--- a/docs/dyn/compute_alpha.networks.html
+++ b/docs/dyn/compute_alpha.networks.html
@@ -443,6 +443,7 @@ Method Details
 "name": "A String", # Name of the resource. Provided by the client when the resource is created. The name must be 1-63 characters long, and comply with RFC1035. Specifically, the name must be 1-63 characters long and match the regular expression `[a-z]([-a-z0-9]*[a-z0-9])?`. The first character must be a lowercase letter, and all following characters (except for the last character) must be a dash, lowercase letter, or digit. The last character must be a lowercase letter or digit.
 "networkFirewallPolicyEnforcementOrder": "A String", # The network firewall policy enforcement order. Can be either AFTER_CLASSIC_FIREWALL or BEFORE_CLASSIC_FIREWALL. Defaults to AFTER_CLASSIC_FIREWALL if the field is not specified.
 "networkPlacement": "A String", # A full or partial URL of the network placement to apply to this network. This field can be set only at resource creation time. For example, the following are valid URLs: - https://www.googleapis.com/compute/alpha/projects/{project_id}/global/networkPlacements/{network_placement_name} - projects/{project_id}/global/networkPlacements/{network_placement_name}
+"networkProfile": "A String", # A full or partial URL of the network profile to apply to this network. This field can be set only at resource creation time. For example, the following are valid URLs: - https://www.googleapis.com/compute/alpha/projects/{project_id}/global/networkProfiles/{network_profile_name} - projects/{project_id}/global/networkProfiles/{network_profile_name}
 "peerings": [ # [Output Only] A list of network peerings for the resource.
 { # A network peering attached to a network resource. The message includes the peering name, peer network, peering state, and a flag indicating whether Google Compute Engine should automatically create routes for the peering.
 "advertisePeerSubnetsViaRouters": True or False, # Whether Cloud Routers in this network can automatically advertise subnets from the peer network.
@@ -834,6 +835,7 @@ Method Details
 "name": "A String", # Name of the resource. Provided by the client when the resource is created. The name must be 1-63 characters long, and comply with RFC1035. Specifically, the name must be 1-63 characters long and match the regular expression `[a-z]([-a-z0-9]*[a-z0-9])?`. The first character must be a lowercase letter, and all following characters (except for the last character) must be a dash, lowercase letter, or digit. The last character must be a lowercase letter or digit.
 "networkFirewallPolicyEnforcementOrder": "A String", # The network firewall policy enforcement order. Can be either AFTER_CLASSIC_FIREWALL or BEFORE_CLASSIC_FIREWALL. Defaults to AFTER_CLASSIC_FIREWALL if the field is not specified.
 "networkPlacement": "A String", # A full or partial URL of the network placement to apply to this network. This field can be set only at resource creation time. For example, the following are valid URLs: - https://www.googleapis.com/compute/alpha/projects/{project_id}/global/networkPlacements/{network_placement_name} - projects/{project_id}/global/networkPlacements/{network_placement_name}
+"networkProfile": "A String", # A full or partial URL of the network profile to apply to this network. This field can be set only at resource creation time. For example, the following are valid URLs: - https://www.googleapis.com/compute/alpha/projects/{project_id}/global/networkProfiles/{network_profile_name} - projects/{project_id}/global/networkProfiles/{network_profile_name}
 "peerings": [ # [Output Only] A list of network peerings for the resource.
 { # A network peering attached to a network resource. The message includes the peering name, peer network, peering state, and a flag indicating whether Google Compute Engine should automatically create routes for the peering.
 "advertisePeerSubnetsViaRouters": True or False, # Whether Cloud Routers in this network can automatically advertise subnets from the peer network.
@@ -1020,6 +1022,7 @@ Method Details
 "name": "A String", # Name of the resource. Provided by the client when the resource is created. The name must be 1-63 characters long, and comply with RFC1035. Specifically, the name must be 1-63 characters long and match the regular expression `[a-z]([-a-z0-9]*[a-z0-9])?`. The first character must be a lowercase letter, and all following characters (except for the last character) must be a dash, lowercase letter, or digit. The last character must be a lowercase letter or digit.
 "networkFirewallPolicyEnforcementOrder": "A String", # The network firewall policy enforcement order. Can be either AFTER_CLASSIC_FIREWALL or BEFORE_CLASSIC_FIREWALL. Defaults to AFTER_CLASSIC_FIREWALL if the field is not specified.
 "networkPlacement": "A String", # A full or partial URL of the network placement to apply to this network. This field can be set only at resource creation time. For example, the following are valid URLs: - https://www.googleapis.com/compute/alpha/projects/{project_id}/global/networkPlacements/{network_placement_name} - projects/{project_id}/global/networkPlacements/{network_placement_name}
+"networkProfile": "A String", # A full or partial URL of the network profile to apply to this network. This field can be set only at resource creation time. For example, the following are valid URLs: - https://www.googleapis.com/compute/alpha/projects/{project_id}/global/networkProfiles/{network_profile_name} - projects/{project_id}/global/networkProfiles/{network_profile_name}
 "peerings": [ # [Output Only] A list of network peerings for the resource.
 { # A network peering attached to a network resource. The message includes the peering name, peer network, peering state, and a flag indicating whether Google Compute Engine should automatically create routes for the peering.
 "advertisePeerSubnetsViaRouters": True or False, # Whether Cloud Routers in this network can automatically advertise subnets from the peer network.
@@ -1301,6 +1304,7 @@ Method Details
 "name": "A String", # Name of the resource. Provided by the client when the resource is created. The name must be 1-63 characters long, and comply with RFC1035. Specifically, the name must be 1-63 characters long and match the regular expression `[a-z]([-a-z0-9]*[a-z0-9])?`. The first character must be a lowercase letter, and all following characters (except for the last character) must be a dash, lowercase letter, or digit. The last character must be a lowercase letter or digit.
 "networkFirewallPolicyEnforcementOrder": "A String", # The network firewall policy enforcement order. Can be either AFTER_CLASSIC_FIREWALL or BEFORE_CLASSIC_FIREWALL. Defaults to AFTER_CLASSIC_FIREWALL if the field is not specified.
 "networkPlacement": "A String", # A full or partial URL of the network placement to apply to this network. This field can be set only at resource creation time. For example, the following are valid URLs: - https://www.googleapis.com/compute/alpha/projects/{project_id}/global/networkPlacements/{network_placement_name} - projects/{project_id}/global/networkPlacements/{network_placement_name}
+"networkProfile": "A String", # A full or partial URL of the network profile to apply to this network. This field can be set only at resource creation time. For example, the following are valid URLs: - https://www.googleapis.com/compute/alpha/projects/{project_id}/global/networkProfiles/{network_profile_name} - projects/{project_id}/global/networkProfiles/{network_profile_name}
 "peerings": [ # [Output Only] A list of network peerings for the resource.
 { # A network peering attached to a network resource. The message includes the peering name, peer network, peering state, and a flag indicating whether Google Compute Engine should automatically create routes for the peering.
 "advertisePeerSubnetsViaRouters": True or False, # Whether Cloud Routers in this network can automatically advertise subnets from the peer network.

diff --git a/docs/dyn/compute_alpha.regionZones.html b/docs/dyn/compute_alpha.regionZones.html
index 871055f89eb..d737f9c6c1a 100644
--- a/docs/dyn/compute_alpha.regionZones.html
+++ b/docs/dyn/compute_alpha.regionZones.html
@@ -112,7 +112,7 @@ Method Details
 { # Contains a list of zone resources.
 "id": "A String", # [Output Only] Unique identifier for the resource; defined by the server.
 "items": [ # A list of Zone resources.
-{ # Represents a Zone resource. A zone is a deployment area. These deployment areas are subsets of a region. For example the zone us-east1-a is located in the us-east1 region. For more information, read Regions and Zones.
+{ # Represents a Zone resource. A zone is a deployment area. These deployment areas are subsets of a region. For example the zone us-east1-b is located in the us-east1 region. For more information, read Regions and Zones.
 "availableCpuPlatforms": [ # [Output Only] Available cpu/platform selections for the zone.
 "A String",
 ],

diff --git a/docs/dyn/compute_alpha.zones.html b/docs/dyn/compute_alpha.zones.html
index b788aa4da59..57c0b5b3a45 100644
--- a/docs/dyn/compute_alpha.zones.html
+++ b/docs/dyn/compute_alpha.zones.html
@@ -107,7 +107,7 @@ Method Details
 Returns:
 An object of the form:

-{ # Represents a Zone resource. A zone is a deployment area. These deployment areas are subsets of a region. For example the zone us-east1-a is located in the us-east1 region. For more information, read Regions and Zones.
+{ # Represents a Zone resource. A zone is a deployment area. These deployment areas are subsets of a region. For example the zone us-east1-b is located in the us-east1 region. For more information, read Regions and Zones.
 "availableCpuPlatforms": [ # [Output Only] Available cpu/platform selections for the zone.
 "A String",
 ],
@@ -158,7 +158,7 @@ Method Details
 { # Contains a list of zone resources.
 "id": "A String", # [Output Only] Unique identifier for the resource; defined by the server.
 "items": [ # A list of Zone resources.
-{ # Represents a Zone resource. A zone is a deployment area. These deployment areas are subsets of a region. For example the zone us-east1-a is located in the us-east1 region. For more information, read Regions and Zones.
+{ # Represents a Zone resource. A zone is a deployment area. These deployment areas are subsets of a region. For example the zone us-east1-b is located in the us-east1 region. For more information, read Regions and Zones.
 "availableCpuPlatforms": [ # [Output Only] Available cpu/platform selections for the zone.
 "A String",
 ],
diff --git a/docs/dyn/compute_beta.instanceGroupManagers.html b/docs/dyn/compute_beta.instanceGroupManagers.html
index 7d35f770a52..94b629af2ff 100644
--- a/docs/dyn/compute_beta.instanceGroupManagers.html
+++ b/docs/dyn/compute_beta.instanceGroupManagers.html
@@ -400,6 +400,10 @@ Method Details
"rank": 42, # Preference of this instance selection. Lower number means higher preference. MIG will first try to create a VM based on the machine-type with lowest rank and fallback to next rank based on availability. Machine types and instance selections with the same rank have the same preference. }, }, + "provisioningModelMix": { # Provisioning model configuration used by this managed instance group to create instances. + "standardCapacityBase": 42, # The base capacity that will always use Standard VMs to avoid risk of more preemption than the minimum capacity user needs. MIG will create only Standard VMs until it reaches standard_capacity_base and only then will start using standard_capacity_percent_above_base to mix Spot with Standard VMs. + "standardCapacityPercentAboveBase": 42, # The percentage of target capacity that should use Standard VM. The remaining percentage will use Spot VMs. The percentage applies only to the capacity above standard_capacity_base. + }, }, "instanceGroup": "A String", # [Output Only] The URL of the Instance Group resource. "instanceLifecyclePolicy": { # The repair policy for this managed instance group. @@ -422,6 +426,8 @@

Method Details

}, }, "region": "A String", # [Output Only] The URL of the region where the managed instance group resides (for regional resources). + "satisfiesPzi": True or False, # [Output Only] Reserved for future use. + "satisfiesPzs": True or False, # [Output Only] Reserved for future use. "selfLink": "A String", # [Output Only] The URL for this managed instance group. The server defines this URL. "serviceAccount": "A String", # The service account to be used as credentials for all operations performed by the managed instance group on instances. The service accounts needs all permissions required to create and delete instances. By default, the service account {projectNumber}@cloudservices.gserviceaccount.com is used. "standbyPolicy": { # Standby policy for stopped and suspended instances. @@ -1329,6 +1335,10 @@

Method Details

"rank": 42, # Preference of this instance selection. Lower number means higher preference. MIG will first try to create a VM based on the machine-type with lowest rank and fallback to next rank based on availability. Machine types and instance selections with the same rank have the same preference. }, }, + "provisioningModelMix": { # Provisioning model configuration used by this managed instance group to create instances. + "standardCapacityBase": 42, # The base capacity that will always use Standard VMs to avoid risk of more preemption than the minimum capacity user needs. MIG will create only Standard VMs until it reaches standard_capacity_base and only then will start using standard_capacity_percent_above_base to mix Spot with Standard VMs. + "standardCapacityPercentAboveBase": 42, # The percentage of target capacity that should use Standard VM. The remaining percentage will use Spot VMs. The percentage applies only to the capacity above standard_capacity_base. + }, }, "instanceGroup": "A String", # [Output Only] The URL of the Instance Group resource. "instanceLifecyclePolicy": { # The repair policy for this managed instance group. @@ -1351,6 +1361,8 @@

Method Details

}, }, "region": "A String", # [Output Only] The URL of the region where the managed instance group resides (for regional resources). + "satisfiesPzi": True or False, # [Output Only] Reserved for future use. + "satisfiesPzs": True or False, # [Output Only] Reserved for future use. "selfLink": "A String", # [Output Only] The URL for this managed instance group. The server defines this URL. "serviceAccount": "A String", # The service account to be used as credentials for all operations performed by the managed instance group on instances. The service accounts needs all permissions required to create and delete instances. By default, the service account {projectNumber}@cloudservices.gserviceaccount.com is used. "standbyPolicy": { # Standby policy for stopped and suspended instances. @@ -1506,6 +1518,10 @@

Method Details

"rank": 42, # Preference of this instance selection. Lower number means higher preference. MIG will first try to create a VM based on the machine-type with lowest rank and fallback to next rank based on availability. Machine types and instance selections with the same rank have the same preference. }, }, + "provisioningModelMix": { # Provisioning model configuration used by this managed instance group to create instances. + "standardCapacityBase": 42, # The base capacity that will always use Standard VMs to avoid risk of more preemption than the minimum capacity user needs. MIG will create only Standard VMs until it reaches standard_capacity_base and only then will start using standard_capacity_percent_above_base to mix Spot with Standard VMs. + "standardCapacityPercentAboveBase": 42, # The percentage of target capacity that should use Standard VM. The remaining percentage will use Spot VMs. The percentage applies only to the capacity above standard_capacity_base. + }, }, "instanceGroup": "A String", # [Output Only] The URL of the Instance Group resource. "instanceLifecyclePolicy": { # The repair policy for this managed instance group. @@ -1528,6 +1544,8 @@

Method Details

}, }, "region": "A String", # [Output Only] The URL of the region where the managed instance group resides (for regional resources). + "satisfiesPzi": True or False, # [Output Only] Reserved for future use. + "satisfiesPzs": True or False, # [Output Only] Reserved for future use. "selfLink": "A String", # [Output Only] The URL for this managed instance group. The server defines this URL. "serviceAccount": "A String", # The service account to be used as credentials for all operations performed by the managed instance group on instances. The service accounts needs all permissions required to create and delete instances. By default, the service account {projectNumber}@cloudservices.gserviceaccount.com is used. "standbyPolicy": { # Standby policy for stopped and suspended instances. @@ -1812,6 +1830,10 @@

Method Details

"rank": 42, # Preference of this instance selection. Lower number means higher preference. MIG will first try to create a VM based on the machine-type with lowest rank and fallback to next rank based on availability. Machine types and instance selections with the same rank have the same preference. }, }, + "provisioningModelMix": { # Provisioning model configuration used by this managed instance group to create instances. + "standardCapacityBase": 42, # The base capacity that will always use Standard VMs to avoid risk of more preemption than the minimum capacity user needs. MIG will create only Standard VMs until it reaches standard_capacity_base and only then will start using standard_capacity_percent_above_base to mix Spot with Standard VMs. + "standardCapacityPercentAboveBase": 42, # The percentage of target capacity that should use Standard VM. The remaining percentage will use Spot VMs. The percentage applies only to the capacity above standard_capacity_base. + }, }, "instanceGroup": "A String", # [Output Only] The URL of the Instance Group resource. "instanceLifecyclePolicy": { # The repair policy for this managed instance group. @@ -1834,6 +1856,8 @@

Method Details

}, }, "region": "A String", # [Output Only] The URL of the region where the managed instance group resides (for regional resources). + "satisfiesPzi": True or False, # [Output Only] Reserved for future use. + "satisfiesPzs": True or False, # [Output Only] Reserved for future use. "selfLink": "A String", # [Output Only] The URL for this managed instance group. The server defines this URL. "serviceAccount": "A String", # The service account to be used as credentials for all operations performed by the managed instance group on instances. The service accounts needs all permissions required to create and delete instances. By default, the service account {projectNumber}@cloudservices.gserviceaccount.com is used. "standbyPolicy": { # Standby policy for stopped and suspended instances. @@ -2135,6 +2159,7 @@

Method Details

}, "propertiesFromFlexibilityPolicy": { # [Output Only] Instance properties selected for this instance resulting from InstanceFlexibilityPolicy. "machineType": "A String", # The machine type to be used for this instance. + "provisioningModel": "A String", # The provisioning model to be used for this instance. }, "targetStatus": "A String", # [Output Only] The eventual status of the instance. The instance group manager will not be identified as stable till each managed instance reaches its targetStatus. "version": { # [Output Only] Intended version of this instance. @@ -2336,6 +2361,10 @@

Method Details

"rank": 42, # Preference of this instance selection. Lower number means higher preference. MIG will first try to create a VM based on the machine-type with lowest rank and fallback to next rank based on availability. Machine types and instance selections with the same rank have the same preference. }, }, + "provisioningModelMix": { # Provisioning model configuration used by this managed instance group to create instances. + "standardCapacityBase": 42, # The base capacity that will always use Standard VMs to avoid risk of more preemption than the minimum capacity user needs. MIG will create only Standard VMs until it reaches standard_capacity_base and only then will start using standard_capacity_percent_above_base to mix Spot with Standard VMs. + "standardCapacityPercentAboveBase": 42, # The percentage of target capacity that should use Standard VM. The remaining percentage will use Spot VMs. The percentage applies only to the capacity above standard_capacity_base. + }, }, "instanceGroup": "A String", # [Output Only] The URL of the Instance Group resource. "instanceLifecyclePolicy": { # The repair policy for this managed instance group. @@ -2358,6 +2387,8 @@

Method Details

}, }, "region": "A String", # [Output Only] The URL of the region where the managed instance group resides (for regional resources). + "satisfiesPzi": True or False, # [Output Only] Reserved for future use. + "satisfiesPzs": True or False, # [Output Only] Reserved for future use. "selfLink": "A String", # [Output Only] The URL for this managed instance group. The server defines this URL. "serviceAccount": "A String", # The service account to be used as credentials for all operations performed by the managed instance group on instances. The service accounts needs all permissions required to create and delete instances. By default, the service account {projectNumber}@cloudservices.gserviceaccount.com is used. "standbyPolicy": { # Standby policy for stopped and suspended instances. @@ -4165,6 +4196,10 @@

Method Details

"rank": 42, # Preference of this instance selection. Lower number means higher preference. MIG will first try to create a VM based on the machine-type with lowest rank and fallback to next rank based on availability. Machine types and instance selections with the same rank have the same preference. }, }, + "provisioningModelMix": { # Provisioning model configuration used by this managed instance group to create instances. + "standardCapacityBase": 42, # The base capacity that will always use Standard VMs to avoid risk of more preemption than the minimum capacity user needs. MIG will create only Standard VMs until it reaches standard_capacity_base and only then will start using standard_capacity_percent_above_base to mix Spot with Standard VMs. + "standardCapacityPercentAboveBase": 42, # The percentage of target capacity that should use Standard VM. The remaining percentage will use Spot VMs. The percentage applies only to the capacity above standard_capacity_base. + }, }, "instanceGroup": "A String", # [Output Only] The URL of the Instance Group resource. "instanceLifecyclePolicy": { # The repair policy for this managed instance group. @@ -4187,6 +4222,8 @@

Method Details

}, }, "region": "A String", # [Output Only] The URL of the region where the managed instance group resides (for regional resources). + "satisfiesPzi": True or False, # [Output Only] Reserved for future use. + "satisfiesPzs": True or False, # [Output Only] Reserved for future use. "selfLink": "A String", # [Output Only] The URL for this managed instance group. The server defines this URL. "serviceAccount": "A String", # The service account to be used as credentials for all operations performed by the managed instance group on instances. The service accounts needs all permissions required to create and delete instances. By default, the service account {projectNumber}@cloudservices.gserviceaccount.com is used. "standbyPolicy": { # Standby policy for stopped and suspended instances. diff --git a/docs/dyn/compute_beta.instanceTemplates.html b/docs/dyn/compute_beta.instanceTemplates.html index 96610855977..6f3dd5e6efc 100644 --- a/docs/dyn/compute_beta.instanceTemplates.html +++ b/docs/dyn/compute_beta.instanceTemplates.html @@ -205,7 +205,7 @@
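To make the provisioningModelMix arithmetic concrete: with targetSize=10, standardCapacityBase=4, and standardCapacityPercentAboveBase=50, the group keeps the first 4 VMs on the Standard model, then splits the remaining 6 into 3 Standard (50%) and 3 Spot. A hedged sketch of a matching insert request follows; the project, zone, and resource names are placeholders.

from googleapiclient.discovery import build

compute = build("compute", "beta")

mig_body = {
    "name": "example-mig",
    "targetSize": 10,
    "instanceTemplate": "global/instanceTemplates/example-template",
    "instanceFlexibilityPolicy": {
        "provisioningModelMix": {
            # The first 4 VMs always use the Standard provisioning model.
            "standardCapacityBase": 4,
            # Above the base, 50% Standard (3 VMs) and the rest Spot (3 VMs).
            "standardCapacityPercentAboveBase": 50,
        },
    },
}
operation = compute.instanceGroupManagers().insert(
    project="example-project", zone="us-central1-a", body=mig_body
).execute()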

Method Details

"resourcePolicies": [ # Resource policies applied to this disk for automatic snapshot creations. Specified using the full or partial URL. For instance template, specify only the resource policy name. "A String", ], - "sourceImage": "A String", # The source image to create this disk. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set. + "sourceImage": "A String", # The source image to create this disk. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set. "sourceImageEncryptionKey": { # The customer-supplied encryption key of the source image. Required if the source image is protected by a customer-supplied encryption key. InstanceTemplate and InstancePropertiesPatch do not store customer-supplied encryption keys, so you cannot create disks for instances in a managed instance group if the source images are encrypted with your own keys. "kmsKeyName": "A String", # The name of the encryption key that is stored in Google Cloud KMS. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key The fully-qualifed key name may be returned for resource GET requests. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key /cryptoKeyVersions/1 "kmsKeyServiceAccount": "A String", # The service account being used for the encryption request for the given KMS key. If absent, the Compute Engine default service account is used. For example: "kmsKeyServiceAccount": "name@project_id.iam.gserviceaccount.com/ @@ -213,8 +213,8 @@

Method Details

"rsaEncryptedKey": "A String", # Specifies an RFC 4648 base64 encoded, RSA-wrapped 2048-bit customer-supplied encryption key to either encrypt or decrypt this resource. You can provide either the rawKey or the rsaEncryptedKey. For example: "rsaEncryptedKey": "ieCx/NcW06PcT7Ep1X6LUTc/hLvUDYyzSZPPVCVPTVEohpeHASqC8uw5TzyO9U+Fka9JFH z0mBibXUInrC/jEk014kCK/NPjYgEMOyssZ4ZINPKxlUh2zn1bV+MCaTICrdmuSBTWlUUiFoD D6PYznLwh8ZNdaheCeZ8ewEXgFQ8V+sDroLaN3Xs3MDTXQEMMoNUXMCZEIpg9Vtp9x2oe==" The key must meet the following requirements before you can provide it to Compute Engine: 1. The key is wrapped using a RSA public key certificate provided by Google. 2. After being wrapped, the key must be encoded in RFC 4648 base64 encoding. Gets the RSA public key certificate provided by Google at: https://cloud-certs.storage.googleapis.com/google-cloud-csek-ingress.pem "sha256": "A String", # [Output only] The RFC 4648 base64 encoded SHA-256 hash of the customer-supplied encryption key that protects this resource. }, - "sourceInstantSnapshot": "A String", # The source instant-snapshot to create this disk. When creating a new instance, one of initializeParams.sourceSnapshot or initializeParams.sourceInstantSnapshot initializeParams.sourceImage or disks.source is required except for local SSD. To create a disk with a snapshot that you created, specify the snapshot name in the following format: us-central1-a/instantSnapshots/my-backup If the source instant-snapshot is deleted later, this field will not be set. - "sourceSnapshot": "A String", # The source snapshot to create this disk. When creating a new instance, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required except for local SSD. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set. + "sourceInstantSnapshot": "A String", # The source instant-snapshot to create this disk. When creating a new instance boot disk, one of initializeParams.sourceSnapshot or initializeParams.sourceInstantSnapshot initializeParams.sourceImage or disks.source is required. To create a disk with a snapshot that you created, specify the snapshot name in the following format: us-central1-a/instantSnapshots/my-backup If the source instant-snapshot is deleted later, this field will not be set. + "sourceSnapshot": "A String", # The source snapshot to create this disk. When creating a new instance boot disk, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set. "sourceSnapshotEncryptionKey": { # The customer-supplied encryption key of the source snapshot. "kmsKeyName": "A String", # The name of the encryption key that is stored in Google Cloud KMS. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key The fully-qualifed key name may be returned for resource GET requests. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key /cryptoKeyVersions/1 "kmsKeyServiceAccount": "A String", # The service account being used for the encryption request for the given KMS key. If absent, the Compute Engine default service account is used. 
For example: "kmsKeyServiceAccount": "name@project_id.iam.gserviceaccount.com/ @@ -256,7 +256,7 @@

Method Details

"fileType": "A String", # The file type of source file. }, }, - "source": "A String", # Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk. + "source": "A String", # Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk. "type": "A String", # Specifies the type of the disk, either SCRATCH or PERSISTENT. If not specified, the default is PERSISTENT. "userLicenses": [ # [Output Only] A list of user provided licenses. It represents a list of URLs to the license resource. Unlike regular licenses, user provided licenses can be modified after the disk is created. "A String", @@ -699,7 +699,7 @@

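The disks[].source wording above concerns attaching an existing persistent disk rather than creating one from initializeParams. A minimal googleapiclient sketch, assuming Application Default Credentials; the project, zone, and resource names are placeholders:

    from googleapiclient import discovery

    compute = discovery.build("compute", "beta")
    project, zone = "my-project", "us-central1-a"

    # Attach an existing non-root persistent disk by URL; when source is set,
    # initializeParams is not used for this disk.
    attached_disk = {
        "boot": False,
        "type": "PERSISTENT",
        "autoDelete": False,
        "source": f"projects/{project}/zones/{zone}/disks/my-data-disk",
    }

    operation = compute.instances().attachDisk(
        project=project, zone=zone, instance="my-instance", body=attached_disk
    ).execute()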
@@ -699,7 +699,7 @@ Method Details
"resourcePolicies": [ # Resource policies applied to this disk for automatic snapshot creations. Specified using the full or partial URL. For instance template, specify only the resource policy name. "A String", ], - "sourceImage": "A String", # The source image to create this disk. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set. + "sourceImage": "A String", # The source image to create this disk. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set. "sourceImageEncryptionKey": { # The customer-supplied encryption key of the source image. Required if the source image is protected by a customer-supplied encryption key. InstanceTemplate and InstancePropertiesPatch do not store customer-supplied encryption keys, so you cannot create disks for instances in a managed instance group if the source images are encrypted with your own keys. "kmsKeyName": "A String", # The name of the encryption key that is stored in Google Cloud KMS. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key The fully-qualifed key name may be returned for resource GET requests. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key /cryptoKeyVersions/1 "kmsKeyServiceAccount": "A String", # The service account being used for the encryption request for the given KMS key. If absent, the Compute Engine default service account is used. For example: "kmsKeyServiceAccount": "name@project_id.iam.gserviceaccount.com/ @@ -707,8 +707,8 @@

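To make the sourceImage formats above concrete, here is a hypothetical boot-disk entry; each commented alternative is one of the formats the description lists:

    # A boot-disk spec using the image formats from the sourceImage description.
    boot_disk = {
        "boot": True,
        "autoDelete": True,
        "initializeParams": {
            # Latest image in a public family:
            "sourceImage": "projects/debian-cloud/global/images/family/debian-9",
            # ...or a pinned version of a public image:
            #   "projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD"
            # ...or a custom image / custom image family in this project:
            #   "global/images/my-custom-image"
            #   "global/images/family/my-image-family"
        },
    }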
@@ -707,8 +707,8 @@ Method Details
"rsaEncryptedKey": "A String", # Specifies an RFC 4648 base64 encoded, RSA-wrapped 2048-bit customer-supplied encryption key to either encrypt or decrypt this resource. You can provide either the rawKey or the rsaEncryptedKey. For example: "rsaEncryptedKey": "ieCx/NcW06PcT7Ep1X6LUTc/hLvUDYyzSZPPVCVPTVEohpeHASqC8uw5TzyO9U+Fka9JFH z0mBibXUInrC/jEk014kCK/NPjYgEMOyssZ4ZINPKxlUh2zn1bV+MCaTICrdmuSBTWlUUiFoD D6PYznLwh8ZNdaheCeZ8ewEXgFQ8V+sDroLaN3Xs3MDTXQEMMoNUXMCZEIpg9Vtp9x2oe==" The key must meet the following requirements before you can provide it to Compute Engine: 1. The key is wrapped using a RSA public key certificate provided by Google. 2. After being wrapped, the key must be encoded in RFC 4648 base64 encoding. Gets the RSA public key certificate provided by Google at: https://cloud-certs.storage.googleapis.com/google-cloud-csek-ingress.pem "sha256": "A String", # [Output only] The RFC 4648 base64 encoded SHA-256 hash of the customer-supplied encryption key that protects this resource. }, - "sourceInstantSnapshot": "A String", # The source instant-snapshot to create this disk. When creating a new instance, one of initializeParams.sourceSnapshot or initializeParams.sourceInstantSnapshot initializeParams.sourceImage or disks.source is required except for local SSD. To create a disk with a snapshot that you created, specify the snapshot name in the following format: us-central1-a/instantSnapshots/my-backup If the source instant-snapshot is deleted later, this field will not be set. - "sourceSnapshot": "A String", # The source snapshot to create this disk. When creating a new instance, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required except for local SSD. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set. + "sourceInstantSnapshot": "A String", # The source instant-snapshot to create this disk. When creating a new instance boot disk, one of initializeParams.sourceSnapshot or initializeParams.sourceInstantSnapshot initializeParams.sourceImage or disks.source is required. To create a disk with a snapshot that you created, specify the snapshot name in the following format: us-central1-a/instantSnapshots/my-backup If the source instant-snapshot is deleted later, this field will not be set. + "sourceSnapshot": "A String", # The source snapshot to create this disk. When creating a new instance boot disk, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set. "sourceSnapshotEncryptionKey": { # The customer-supplied encryption key of the source snapshot. "kmsKeyName": "A String", # The name of the encryption key that is stored in Google Cloud KMS. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key The fully-qualifed key name may be returned for resource GET requests. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key /cryptoKeyVersions/1 "kmsKeyServiceAccount": "A String", # The service account being used for the encryption request for the given KMS key. If absent, the Compute Engine default service account is used. 
For example: "kmsKeyServiceAccount": "name@project_id.iam.gserviceaccount.com/ @@ -750,7 +750,7 @@

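The sourceSnapshot and sourceSnapshotEncryptionKey fields above combine as follows; this is a hypothetical payload whose KMS key path and service account simply reuse the placeholder values from the descriptions:

    # Create the boot disk from a snapshot protected by a customer-managed KMS key.
    disk_from_snapshot = {
        "boot": True,
        "initializeParams": {
            "sourceSnapshot": "global/snapshots/my-backup",
            "sourceSnapshotEncryptionKey": {
                "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/key_region/cryptoKeys/key",
                # Optional; the Compute Engine default service account is used if absent.
                "kmsKeyServiceAccount": "name@project_id.iam.gserviceaccount.com",
            },
        },
    }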
@@ -750,7 +750,7 @@ Method Details
"fileType": "A String", # The file type of source file. }, }, - "source": "A String", # Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk. + "source": "A String", # Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk. "type": "A String", # Specifies the type of the disk, either SCRATCH or PERSISTENT. If not specified, the default is PERSISTENT. "userLicenses": [ # [Output Only] A list of user provided licenses. It represents a list of URLs to the license resource. Unlike regular licenses, user provided licenses can be modified after the disk is created. "A String", @@ -1112,7 +1112,7 @@

@@ -1112,7 +1112,7 @@ Method Details
"resourcePolicies": [ # Resource policies applied to this disk for automatic snapshot creations. Specified using the full or partial URL. For instance template, specify only the resource policy name. "A String", ], - "sourceImage": "A String", # The source image to create this disk. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set. + "sourceImage": "A String", # The source image to create this disk. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set. "sourceImageEncryptionKey": { # The customer-supplied encryption key of the source image. Required if the source image is protected by a customer-supplied encryption key. InstanceTemplate and InstancePropertiesPatch do not store customer-supplied encryption keys, so you cannot create disks for instances in a managed instance group if the source images are encrypted with your own keys. "kmsKeyName": "A String", # The name of the encryption key that is stored in Google Cloud KMS. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key The fully-qualifed key name may be returned for resource GET requests. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key /cryptoKeyVersions/1 "kmsKeyServiceAccount": "A String", # The service account being used for the encryption request for the given KMS key. If absent, the Compute Engine default service account is used. For example: "kmsKeyServiceAccount": "name@project_id.iam.gserviceaccount.com/ @@ -1120,8 +1120,8 @@

@@ -1120,8 +1120,8 @@ Method Details
"rsaEncryptedKey": "A String", # Specifies an RFC 4648 base64 encoded, RSA-wrapped 2048-bit customer-supplied encryption key to either encrypt or decrypt this resource. You can provide either the rawKey or the rsaEncryptedKey. For example: "rsaEncryptedKey": "ieCx/NcW06PcT7Ep1X6LUTc/hLvUDYyzSZPPVCVPTVEohpeHASqC8uw5TzyO9U+Fka9JFH z0mBibXUInrC/jEk014kCK/NPjYgEMOyssZ4ZINPKxlUh2zn1bV+MCaTICrdmuSBTWlUUiFoD D6PYznLwh8ZNdaheCeZ8ewEXgFQ8V+sDroLaN3Xs3MDTXQEMMoNUXMCZEIpg9Vtp9x2oe==" The key must meet the following requirements before you can provide it to Compute Engine: 1. The key is wrapped using a RSA public key certificate provided by Google. 2. After being wrapped, the key must be encoded in RFC 4648 base64 encoding. Gets the RSA public key certificate provided by Google at: https://cloud-certs.storage.googleapis.com/google-cloud-csek-ingress.pem "sha256": "A String", # [Output only] The RFC 4648 base64 encoded SHA-256 hash of the customer-supplied encryption key that protects this resource. }, - "sourceInstantSnapshot": "A String", # The source instant-snapshot to create this disk. When creating a new instance, one of initializeParams.sourceSnapshot or initializeParams.sourceInstantSnapshot initializeParams.sourceImage or disks.source is required except for local SSD. To create a disk with a snapshot that you created, specify the snapshot name in the following format: us-central1-a/instantSnapshots/my-backup If the source instant-snapshot is deleted later, this field will not be set. - "sourceSnapshot": "A String", # The source snapshot to create this disk. When creating a new instance, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required except for local SSD. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set. + "sourceInstantSnapshot": "A String", # The source instant-snapshot to create this disk. When creating a new instance boot disk, one of initializeParams.sourceSnapshot or initializeParams.sourceInstantSnapshot initializeParams.sourceImage or disks.source is required. To create a disk with a snapshot that you created, specify the snapshot name in the following format: us-central1-a/instantSnapshots/my-backup If the source instant-snapshot is deleted later, this field will not be set. + "sourceSnapshot": "A String", # The source snapshot to create this disk. When creating a new instance boot disk, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set. "sourceSnapshotEncryptionKey": { # The customer-supplied encryption key of the source snapshot. "kmsKeyName": "A String", # The name of the encryption key that is stored in Google Cloud KMS. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key The fully-qualifed key name may be returned for resource GET requests. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key /cryptoKeyVersions/1 "kmsKeyServiceAccount": "A String", # The service account being used for the encryption request for the given KMS key. If absent, the Compute Engine default service account is used. 
For example: "kmsKeyServiceAccount": "name@project_id.iam.gserviceaccount.com/ @@ -1163,7 +1163,7 @@

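For contrast with the global snapshot path, the sourceInstantSnapshot description above uses a zonal path. A hypothetical payload:

    # Instant snapshots are zonal, so the reference carries a zone prefix.
    disk_from_instant_snapshot = {
        "boot": True,
        "initializeParams": {
            "sourceInstantSnapshot": "us-central1-a/instantSnapshots/my-backup",
        },
    }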
@@ -1163,7 +1163,7 @@ Method Details
"fileType": "A String", # The file type of source file. }, }, - "source": "A String", # Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk. + "source": "A String", # Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk. "type": "A String", # Specifies the type of the disk, either SCRATCH or PERSISTENT. If not specified, the default is PERSISTENT. "userLicenses": [ # [Output Only] A list of user provided licenses. It represents a list of URLs to the license resource. Unlike regular licenses, user provided licenses can be modified after the disk is created. "A String", @@ -1559,7 +1559,7 @@

@@ -1559,7 +1559,7 @@ Method Details
"resourcePolicies": [ # Resource policies applied to this disk for automatic snapshot creations. Specified using the full or partial URL. For instance template, specify only the resource policy name. "A String", ], - "sourceImage": "A String", # The source image to create this disk. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set. + "sourceImage": "A String", # The source image to create this disk. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set. "sourceImageEncryptionKey": { # The customer-supplied encryption key of the source image. Required if the source image is protected by a customer-supplied encryption key. InstanceTemplate and InstancePropertiesPatch do not store customer-supplied encryption keys, so you cannot create disks for instances in a managed instance group if the source images are encrypted with your own keys. "kmsKeyName": "A String", # The name of the encryption key that is stored in Google Cloud KMS. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key The fully-qualifed key name may be returned for resource GET requests. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key /cryptoKeyVersions/1 "kmsKeyServiceAccount": "A String", # The service account being used for the encryption request for the given KMS key. If absent, the Compute Engine default service account is used. For example: "kmsKeyServiceAccount": "name@project_id.iam.gserviceaccount.com/ @@ -1567,8 +1567,8 @@

@@ -1567,8 +1567,8 @@ Method Details
"rsaEncryptedKey": "A String", # Specifies an RFC 4648 base64 encoded, RSA-wrapped 2048-bit customer-supplied encryption key to either encrypt or decrypt this resource. You can provide either the rawKey or the rsaEncryptedKey. For example: "rsaEncryptedKey": "ieCx/NcW06PcT7Ep1X6LUTc/hLvUDYyzSZPPVCVPTVEohpeHASqC8uw5TzyO9U+Fka9JFH z0mBibXUInrC/jEk014kCK/NPjYgEMOyssZ4ZINPKxlUh2zn1bV+MCaTICrdmuSBTWlUUiFoD D6PYznLwh8ZNdaheCeZ8ewEXgFQ8V+sDroLaN3Xs3MDTXQEMMoNUXMCZEIpg9Vtp9x2oe==" The key must meet the following requirements before you can provide it to Compute Engine: 1. The key is wrapped using a RSA public key certificate provided by Google. 2. After being wrapped, the key must be encoded in RFC 4648 base64 encoding. Gets the RSA public key certificate provided by Google at: https://cloud-certs.storage.googleapis.com/google-cloud-csek-ingress.pem "sha256": "A String", # [Output only] The RFC 4648 base64 encoded SHA-256 hash of the customer-supplied encryption key that protects this resource. }, - "sourceInstantSnapshot": "A String", # The source instant-snapshot to create this disk. When creating a new instance, one of initializeParams.sourceSnapshot or initializeParams.sourceInstantSnapshot initializeParams.sourceImage or disks.source is required except for local SSD. To create a disk with a snapshot that you created, specify the snapshot name in the following format: us-central1-a/instantSnapshots/my-backup If the source instant-snapshot is deleted later, this field will not be set. - "sourceSnapshot": "A String", # The source snapshot to create this disk. When creating a new instance, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required except for local SSD. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set. + "sourceInstantSnapshot": "A String", # The source instant-snapshot to create this disk. When creating a new instance boot disk, one of initializeParams.sourceSnapshot or initializeParams.sourceInstantSnapshot initializeParams.sourceImage or disks.source is required. To create a disk with a snapshot that you created, specify the snapshot name in the following format: us-central1-a/instantSnapshots/my-backup If the source instant-snapshot is deleted later, this field will not be set. + "sourceSnapshot": "A String", # The source snapshot to create this disk. When creating a new instance boot disk, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set. "sourceSnapshotEncryptionKey": { # The customer-supplied encryption key of the source snapshot. "kmsKeyName": "A String", # The name of the encryption key that is stored in Google Cloud KMS. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key The fully-qualifed key name may be returned for resource GET requests. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key /cryptoKeyVersions/1 "kmsKeyServiceAccount": "A String", # The service account being used for the encryption request for the given KMS key. If absent, the Compute Engine default service account is used. 
For example: "kmsKeyServiceAccount": "name@project_id.iam.gserviceaccount.com/ @@ -1610,7 +1610,7 @@

@@ -1610,7 +1610,7 @@ Method Details
"fileType": "A String", # The file type of source file. }, }, - "source": "A String", # Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk. + "source": "A String", # Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk. "type": "A String", # Specifies the type of the disk, either SCRATCH or PERSISTENT. If not specified, the default is PERSISTENT. "userLicenses": [ # [Output Only] A list of user provided licenses. It represents a list of URLs to the license resource. Unlike regular licenses, user provided licenses can be modified after the disk is created. "A String", diff --git a/docs/dyn/compute_beta.instances.html b/docs/dyn/compute_beta.instances.html index 74e93b8d26b..26bfa07cfed 100644 --- a/docs/dyn/compute_beta.instances.html +++ b/docs/dyn/compute_beta.instances.html @@ -616,7 +616,7 @@

@@ -616,7 +616,7 @@ Method Details
"resourcePolicies": [ # Resource policies applied to this disk for automatic snapshot creations. Specified using the full or partial URL. For instance template, specify only the resource policy name. "A String", ], - "sourceImage": "A String", # The source image to create this disk. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set. + "sourceImage": "A String", # The source image to create this disk. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set. "sourceImageEncryptionKey": { # The customer-supplied encryption key of the source image. Required if the source image is protected by a customer-supplied encryption key. InstanceTemplate and InstancePropertiesPatch do not store customer-supplied encryption keys, so you cannot create disks for instances in a managed instance group if the source images are encrypted with your own keys. "kmsKeyName": "A String", # The name of the encryption key that is stored in Google Cloud KMS. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key The fully-qualifed key name may be returned for resource GET requests. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key /cryptoKeyVersions/1 "kmsKeyServiceAccount": "A String", # The service account being used for the encryption request for the given KMS key. If absent, the Compute Engine default service account is used. For example: "kmsKeyServiceAccount": "name@project_id.iam.gserviceaccount.com/ @@ -624,8 +624,8 @@

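The rewording in these hunks, from "required except for local SSD" to "required" scoped to the boot disk, is easiest to see in a whole instances().insert body: only the boot disk names a source, while a local SSD scratch disk carries none. A sketch with placeholder names, assuming Application Default Credentials:

    from googleapiclient import discovery

    compute = discovery.build("compute", "beta")
    project, zone = "my-project", "us-central1-a"

    instance_body = {
        "name": "my-instance",
        "machineType": f"zones/{zone}/machineTypes/n2-standard-4",
        "disks": [
            {   # The boot disk must name one of sourceImage, sourceSnapshot, or source.
                "boot": True,
                "autoDelete": True,
                "initializeParams": {
                    "sourceImage": "projects/debian-cloud/global/images/family/debian-9",
                },
            },
            {   # A local SSD scratch disk needs no source; hence the narrower wording.
                "type": "SCRATCH",
                "autoDelete": True,
                "initializeParams": {
                    "diskType": f"zones/{zone}/diskTypes/local-ssd",
                    "diskSizeGb": "375",
                },
            },
        ],
        "networkInterfaces": [{"network": "global/networks/default"}],
    }

    operation = compute.instances().insert(
        project=project, zone=zone, body=instance_body
    ).execute()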
@@ -624,8 +624,8 @@ Method Details
"rsaEncryptedKey": "A String", # Specifies an RFC 4648 base64 encoded, RSA-wrapped 2048-bit customer-supplied encryption key to either encrypt or decrypt this resource. You can provide either the rawKey or the rsaEncryptedKey. For example: "rsaEncryptedKey": "ieCx/NcW06PcT7Ep1X6LUTc/hLvUDYyzSZPPVCVPTVEohpeHASqC8uw5TzyO9U+Fka9JFH z0mBibXUInrC/jEk014kCK/NPjYgEMOyssZ4ZINPKxlUh2zn1bV+MCaTICrdmuSBTWlUUiFoD D6PYznLwh8ZNdaheCeZ8ewEXgFQ8V+sDroLaN3Xs3MDTXQEMMoNUXMCZEIpg9Vtp9x2oe==" The key must meet the following requirements before you can provide it to Compute Engine: 1. The key is wrapped using a RSA public key certificate provided by Google. 2. After being wrapped, the key must be encoded in RFC 4648 base64 encoding. Gets the RSA public key certificate provided by Google at: https://cloud-certs.storage.googleapis.com/google-cloud-csek-ingress.pem "sha256": "A String", # [Output only] The RFC 4648 base64 encoded SHA-256 hash of the customer-supplied encryption key that protects this resource. }, - "sourceInstantSnapshot": "A String", # The source instant-snapshot to create this disk. When creating a new instance, one of initializeParams.sourceSnapshot or initializeParams.sourceInstantSnapshot initializeParams.sourceImage or disks.source is required except for local SSD. To create a disk with a snapshot that you created, specify the snapshot name in the following format: us-central1-a/instantSnapshots/my-backup If the source instant-snapshot is deleted later, this field will not be set. - "sourceSnapshot": "A String", # The source snapshot to create this disk. When creating a new instance, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required except for local SSD. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set. + "sourceInstantSnapshot": "A String", # The source instant-snapshot to create this disk. When creating a new instance boot disk, one of initializeParams.sourceSnapshot or initializeParams.sourceInstantSnapshot initializeParams.sourceImage or disks.source is required. To create a disk with a snapshot that you created, specify the snapshot name in the following format: us-central1-a/instantSnapshots/my-backup If the source instant-snapshot is deleted later, this field will not be set. + "sourceSnapshot": "A String", # The source snapshot to create this disk. When creating a new instance boot disk, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set. "sourceSnapshotEncryptionKey": { # The customer-supplied encryption key of the source snapshot. "kmsKeyName": "A String", # The name of the encryption key that is stored in Google Cloud KMS. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key The fully-qualifed key name may be returned for resource GET requests. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key /cryptoKeyVersions/1 "kmsKeyServiceAccount": "A String", # The service account being used for the encryption request for the given KMS key. If absent, the Compute Engine default service account is used. 
For example: "kmsKeyServiceAccount": "name@project_id.iam.gserviceaccount.com/ @@ -667,7 +667,7 @@

@@ -667,7 +667,7 @@ Method Details
"fileType": "A String", # The file type of source file. }, }, - "source": "A String", # Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk. + "source": "A String", # Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk. "type": "A String", # Specifies the type of the disk, either SCRATCH or PERSISTENT. If not specified, the default is PERSISTENT. "userLicenses": [ # [Output Only] A list of user provided licenses. It represents a list of URLs to the license resource. Unlike regular licenses, user provided licenses can be modified after the disk is created. "A String", @@ -995,7 +995,7 @@

@@ -995,7 +995,7 @@ Method Details
"resourcePolicies": [ # Resource policies applied to this disk for automatic snapshot creations. Specified using the full or partial URL. For instance template, specify only the resource policy name. "A String", ], - "sourceImage": "A String", # The source image to create this disk. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set. + "sourceImage": "A String", # The source image to create this disk. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set. "sourceImageEncryptionKey": { # The customer-supplied encryption key of the source image. Required if the source image is protected by a customer-supplied encryption key. InstanceTemplate and InstancePropertiesPatch do not store customer-supplied encryption keys, so you cannot create disks for instances in a managed instance group if the source images are encrypted with your own keys. "kmsKeyName": "A String", # The name of the encryption key that is stored in Google Cloud KMS. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key The fully-qualifed key name may be returned for resource GET requests. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key /cryptoKeyVersions/1 "kmsKeyServiceAccount": "A String", # The service account being used for the encryption request for the given KMS key. If absent, the Compute Engine default service account is used. For example: "kmsKeyServiceAccount": "name@project_id.iam.gserviceaccount.com/ @@ -1003,8 +1003,8 @@

@@ -1003,8 +1003,8 @@ Method Details
"rsaEncryptedKey": "A String", # Specifies an RFC 4648 base64 encoded, RSA-wrapped 2048-bit customer-supplied encryption key to either encrypt or decrypt this resource. You can provide either the rawKey or the rsaEncryptedKey. For example: "rsaEncryptedKey": "ieCx/NcW06PcT7Ep1X6LUTc/hLvUDYyzSZPPVCVPTVEohpeHASqC8uw5TzyO9U+Fka9JFH z0mBibXUInrC/jEk014kCK/NPjYgEMOyssZ4ZINPKxlUh2zn1bV+MCaTICrdmuSBTWlUUiFoD D6PYznLwh8ZNdaheCeZ8ewEXgFQ8V+sDroLaN3Xs3MDTXQEMMoNUXMCZEIpg9Vtp9x2oe==" The key must meet the following requirements before you can provide it to Compute Engine: 1. The key is wrapped using a RSA public key certificate provided by Google. 2. After being wrapped, the key must be encoded in RFC 4648 base64 encoding. Gets the RSA public key certificate provided by Google at: https://cloud-certs.storage.googleapis.com/google-cloud-csek-ingress.pem "sha256": "A String", # [Output only] The RFC 4648 base64 encoded SHA-256 hash of the customer-supplied encryption key that protects this resource. }, - "sourceInstantSnapshot": "A String", # The source instant-snapshot to create this disk. When creating a new instance, one of initializeParams.sourceSnapshot or initializeParams.sourceInstantSnapshot initializeParams.sourceImage or disks.source is required except for local SSD. To create a disk with a snapshot that you created, specify the snapshot name in the following format: us-central1-a/instantSnapshots/my-backup If the source instant-snapshot is deleted later, this field will not be set. - "sourceSnapshot": "A String", # The source snapshot to create this disk. When creating a new instance, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required except for local SSD. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set. + "sourceInstantSnapshot": "A String", # The source instant-snapshot to create this disk. When creating a new instance boot disk, one of initializeParams.sourceSnapshot or initializeParams.sourceInstantSnapshot initializeParams.sourceImage or disks.source is required. To create a disk with a snapshot that you created, specify the snapshot name in the following format: us-central1-a/instantSnapshots/my-backup If the source instant-snapshot is deleted later, this field will not be set. + "sourceSnapshot": "A String", # The source snapshot to create this disk. When creating a new instance boot disk, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set. "sourceSnapshotEncryptionKey": { # The customer-supplied encryption key of the source snapshot. "kmsKeyName": "A String", # The name of the encryption key that is stored in Google Cloud KMS. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key The fully-qualifed key name may be returned for resource GET requests. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key /cryptoKeyVersions/1 "kmsKeyServiceAccount": "A String", # The service account being used for the encryption request for the given KMS key. If absent, the Compute Engine default service account is used. 
For example: "kmsKeyServiceAccount": "name@project_id.iam.gserviceaccount.com/ @@ -1046,7 +1046,7 @@

@@ -1046,7 +1046,7 @@ Method Details
"fileType": "A String", # The file type of source file. }, }, - "source": "A String", # Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk. + "source": "A String", # Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk. "type": "A String", # Specifies the type of the disk, either SCRATCH or PERSISTENT. If not specified, the default is PERSISTENT. "userLicenses": [ # [Output Only] A list of user provided licenses. It represents a list of URLs to the license resource. Unlike regular licenses, user provided licenses can be modified after the disk is created. "A String", @@ -1248,7 +1248,7 @@

@@ -1248,7 +1248,7 @@ Method Details
"resourcePolicies": [ # Resource policies applied to this disk for automatic snapshot creations. Specified using the full or partial URL. For instance template, specify only the resource policy name. "A String", ], - "sourceImage": "A String", # The source image to create this disk. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set. + "sourceImage": "A String", # The source image to create this disk. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set. "sourceImageEncryptionKey": { # The customer-supplied encryption key of the source image. Required if the source image is protected by a customer-supplied encryption key. InstanceTemplate and InstancePropertiesPatch do not store customer-supplied encryption keys, so you cannot create disks for instances in a managed instance group if the source images are encrypted with your own keys. "kmsKeyName": "A String", # The name of the encryption key that is stored in Google Cloud KMS. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key The fully-qualifed key name may be returned for resource GET requests. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key /cryptoKeyVersions/1 "kmsKeyServiceAccount": "A String", # The service account being used for the encryption request for the given KMS key. If absent, the Compute Engine default service account is used. For example: "kmsKeyServiceAccount": "name@project_id.iam.gserviceaccount.com/ @@ -1256,8 +1256,8 @@

@@ -1256,8 +1256,8 @@ Method Details
"rsaEncryptedKey": "A String", # Specifies an RFC 4648 base64 encoded, RSA-wrapped 2048-bit customer-supplied encryption key to either encrypt or decrypt this resource. You can provide either the rawKey or the rsaEncryptedKey. For example: "rsaEncryptedKey": "ieCx/NcW06PcT7Ep1X6LUTc/hLvUDYyzSZPPVCVPTVEohpeHASqC8uw5TzyO9U+Fka9JFH z0mBibXUInrC/jEk014kCK/NPjYgEMOyssZ4ZINPKxlUh2zn1bV+MCaTICrdmuSBTWlUUiFoD D6PYznLwh8ZNdaheCeZ8ewEXgFQ8V+sDroLaN3Xs3MDTXQEMMoNUXMCZEIpg9Vtp9x2oe==" The key must meet the following requirements before you can provide it to Compute Engine: 1. The key is wrapped using a RSA public key certificate provided by Google. 2. After being wrapped, the key must be encoded in RFC 4648 base64 encoding. Gets the RSA public key certificate provided by Google at: https://cloud-certs.storage.googleapis.com/google-cloud-csek-ingress.pem "sha256": "A String", # [Output only] The RFC 4648 base64 encoded SHA-256 hash of the customer-supplied encryption key that protects this resource. }, - "sourceInstantSnapshot": "A String", # The source instant-snapshot to create this disk. When creating a new instance, one of initializeParams.sourceSnapshot or initializeParams.sourceInstantSnapshot initializeParams.sourceImage or disks.source is required except for local SSD. To create a disk with a snapshot that you created, specify the snapshot name in the following format: us-central1-a/instantSnapshots/my-backup If the source instant-snapshot is deleted later, this field will not be set. - "sourceSnapshot": "A String", # The source snapshot to create this disk. When creating a new instance, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required except for local SSD. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set. + "sourceInstantSnapshot": "A String", # The source instant-snapshot to create this disk. When creating a new instance boot disk, one of initializeParams.sourceSnapshot or initializeParams.sourceInstantSnapshot initializeParams.sourceImage or disks.source is required. To create a disk with a snapshot that you created, specify the snapshot name in the following format: us-central1-a/instantSnapshots/my-backup If the source instant-snapshot is deleted later, this field will not be set. + "sourceSnapshot": "A String", # The source snapshot to create this disk. When creating a new instance boot disk, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set. "sourceSnapshotEncryptionKey": { # The customer-supplied encryption key of the source snapshot. "kmsKeyName": "A String", # The name of the encryption key that is stored in Google Cloud KMS. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key The fully-qualifed key name may be returned for resource GET requests. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key /cryptoKeyVersions/1 "kmsKeyServiceAccount": "A String", # The service account being used for the encryption request for the given KMS key. If absent, the Compute Engine default service account is used. 
For example: "kmsKeyServiceAccount": "name@project_id.iam.gserviceaccount.com/ @@ -1299,7 +1299,7 @@

@@ -1299,7 +1299,7 @@ Method Details
"fileType": "A String", # The file type of source file. }, }, - "source": "A String", # Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk. + "source": "A String", # Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk. "type": "A String", # Specifies the type of the disk, either SCRATCH or PERSISTENT. If not specified, the default is PERSISTENT. "userLicenses": [ # [Output Only] A list of user provided licenses. It represents a list of URLs to the license resource. Unlike regular licenses, user provided licenses can be modified after the disk is created. "A String", @@ -2076,7 +2076,7 @@

@@ -2076,7 +2076,7 @@ Method Details
"resourcePolicies": [ # Resource policies applied to this disk for automatic snapshot creations. Specified using the full or partial URL. For instance template, specify only the resource policy name. "A String", ], - "sourceImage": "A String", # The source image to create this disk. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set. + "sourceImage": "A String", # The source image to create this disk. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set. "sourceImageEncryptionKey": { # The customer-supplied encryption key of the source image. Required if the source image is protected by a customer-supplied encryption key. InstanceTemplate and InstancePropertiesPatch do not store customer-supplied encryption keys, so you cannot create disks for instances in a managed instance group if the source images are encrypted with your own keys. "kmsKeyName": "A String", # The name of the encryption key that is stored in Google Cloud KMS. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key The fully-qualifed key name may be returned for resource GET requests. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key /cryptoKeyVersions/1 "kmsKeyServiceAccount": "A String", # The service account being used for the encryption request for the given KMS key. If absent, the Compute Engine default service account is used. For example: "kmsKeyServiceAccount": "name@project_id.iam.gserviceaccount.com/ @@ -2084,8 +2084,8 @@

@@ -2084,8 +2084,8 @@ Method Details
"rsaEncryptedKey": "A String", # Specifies an RFC 4648 base64 encoded, RSA-wrapped 2048-bit customer-supplied encryption key to either encrypt or decrypt this resource. You can provide either the rawKey or the rsaEncryptedKey. For example: "rsaEncryptedKey": "ieCx/NcW06PcT7Ep1X6LUTc/hLvUDYyzSZPPVCVPTVEohpeHASqC8uw5TzyO9U+Fka9JFH z0mBibXUInrC/jEk014kCK/NPjYgEMOyssZ4ZINPKxlUh2zn1bV+MCaTICrdmuSBTWlUUiFoD D6PYznLwh8ZNdaheCeZ8ewEXgFQ8V+sDroLaN3Xs3MDTXQEMMoNUXMCZEIpg9Vtp9x2oe==" The key must meet the following requirements before you can provide it to Compute Engine: 1. The key is wrapped using a RSA public key certificate provided by Google. 2. After being wrapped, the key must be encoded in RFC 4648 base64 encoding. Gets the RSA public key certificate provided by Google at: https://cloud-certs.storage.googleapis.com/google-cloud-csek-ingress.pem "sha256": "A String", # [Output only] The RFC 4648 base64 encoded SHA-256 hash of the customer-supplied encryption key that protects this resource. }, - "sourceInstantSnapshot": "A String", # The source instant-snapshot to create this disk. When creating a new instance, one of initializeParams.sourceSnapshot or initializeParams.sourceInstantSnapshot initializeParams.sourceImage or disks.source is required except for local SSD. To create a disk with a snapshot that you created, specify the snapshot name in the following format: us-central1-a/instantSnapshots/my-backup If the source instant-snapshot is deleted later, this field will not be set. - "sourceSnapshot": "A String", # The source snapshot to create this disk. When creating a new instance, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required except for local SSD. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set. + "sourceInstantSnapshot": "A String", # The source instant-snapshot to create this disk. When creating a new instance boot disk, one of initializeParams.sourceSnapshot or initializeParams.sourceInstantSnapshot initializeParams.sourceImage or disks.source is required. To create a disk with a snapshot that you created, specify the snapshot name in the following format: us-central1-a/instantSnapshots/my-backup If the source instant-snapshot is deleted later, this field will not be set. + "sourceSnapshot": "A String", # The source snapshot to create this disk. When creating a new instance boot disk, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set. "sourceSnapshotEncryptionKey": { # The customer-supplied encryption key of the source snapshot. "kmsKeyName": "A String", # The name of the encryption key that is stored in Google Cloud KMS. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key The fully-qualifed key name may be returned for resource GET requests. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key /cryptoKeyVersions/1 "kmsKeyServiceAccount": "A String", # The service account being used for the encryption request for the given KMS key. If absent, the Compute Engine default service account is used. 
For example: "kmsKeyServiceAccount": "name@project_id.iam.gserviceaccount.com/ @@ -2127,7 +2127,7 @@

Method Details

"fileType": "A String", # The file type of source file. }, }, - "source": "A String", # Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk. + "source": "A String", # Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk. "type": "A String", # Specifies the type of the disk, either SCRATCH or PERSISTENT. If not specified, the default is PERSISTENT. "userLicenses": [ # [Output Only] A list of user provided licenses. It represents a list of URLs to the license resource. Unlike regular licenses, user provided licenses can be modified after the disk is created. "A String", @@ -3016,7 +3016,7 @@

"fileType": "A String", # The file type of source file. }, }, - "source": "A String", # Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk. + "source": "A String", # Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk. "type": "A String", # Specifies the type of the disk, either SCRATCH or PERSISTENT. If not specified, the default is PERSISTENT. "userLicenses": [ # [Output Only] A list of user provided licenses. It represents a list of URLs to the license resource. Unlike regular licenses, user provided licenses can be modified after the disk is created. "A String", diff --git a/docs/dyn/compute_beta.machineImages.html b/docs/dyn/compute_beta.machineImages.html index 8307e0bb8cc..77161ccf8c4 100644 --- a/docs/dyn/compute_beta.machineImages.html +++ b/docs/dyn/compute_beta.machineImages.html @@ -316,7 +316,7 @@

Method Details

"resourcePolicies": [ # Resource policies applied to this disk for automatic snapshot creations. Specified using the full or partial URL. For instance template, specify only the resource policy name. "A String", ], - "sourceImage": "A String", # The source image to create this disk. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set. + "sourceImage": "A String", # The source image to create this disk. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set. "sourceImageEncryptionKey": { # The customer-supplied encryption key of the source image. Required if the source image is protected by a customer-supplied encryption key. InstanceTemplate and InstancePropertiesPatch do not store customer-supplied encryption keys, so you cannot create disks for instances in a managed instance group if the source images are encrypted with your own keys. "kmsKeyName": "A String", # The name of the encryption key that is stored in Google Cloud KMS. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key The fully-qualifed key name may be returned for resource GET requests. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key /cryptoKeyVersions/1 "kmsKeyServiceAccount": "A String", # The service account being used for the encryption request for the given KMS key. If absent, the Compute Engine default service account is used. For example: "kmsKeyServiceAccount": "name@project_id.iam.gserviceaccount.com/ @@ -324,8 +324,8 @@

Method Details

"rsaEncryptedKey": "A String", # Specifies an RFC 4648 base64 encoded, RSA-wrapped 2048-bit customer-supplied encryption key to either encrypt or decrypt this resource. You can provide either the rawKey or the rsaEncryptedKey. For example: "rsaEncryptedKey": "ieCx/NcW06PcT7Ep1X6LUTc/hLvUDYyzSZPPVCVPTVEohpeHASqC8uw5TzyO9U+Fka9JFH z0mBibXUInrC/jEk014kCK/NPjYgEMOyssZ4ZINPKxlUh2zn1bV+MCaTICrdmuSBTWlUUiFoD D6PYznLwh8ZNdaheCeZ8ewEXgFQ8V+sDroLaN3Xs3MDTXQEMMoNUXMCZEIpg9Vtp9x2oe==" The key must meet the following requirements before you can provide it to Compute Engine: 1. The key is wrapped using a RSA public key certificate provided by Google. 2. After being wrapped, the key must be encoded in RFC 4648 base64 encoding. Gets the RSA public key certificate provided by Google at: https://cloud-certs.storage.googleapis.com/google-cloud-csek-ingress.pem "sha256": "A String", # [Output only] The RFC 4648 base64 encoded SHA-256 hash of the customer-supplied encryption key that protects this resource. }, - "sourceInstantSnapshot": "A String", # The source instant-snapshot to create this disk. When creating a new instance, one of initializeParams.sourceSnapshot or initializeParams.sourceInstantSnapshot initializeParams.sourceImage or disks.source is required except for local SSD. To create a disk with a snapshot that you created, specify the snapshot name in the following format: us-central1-a/instantSnapshots/my-backup If the source instant-snapshot is deleted later, this field will not be set. - "sourceSnapshot": "A String", # The source snapshot to create this disk. When creating a new instance, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required except for local SSD. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set. + "sourceInstantSnapshot": "A String", # The source instant-snapshot to create this disk. When creating a new instance boot disk, one of initializeParams.sourceSnapshot or initializeParams.sourceInstantSnapshot initializeParams.sourceImage or disks.source is required. To create a disk with a snapshot that you created, specify the snapshot name in the following format: us-central1-a/instantSnapshots/my-backup If the source instant-snapshot is deleted later, this field will not be set. + "sourceSnapshot": "A String", # The source snapshot to create this disk. When creating a new instance boot disk, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set. "sourceSnapshotEncryptionKey": { # The customer-supplied encryption key of the source snapshot. "kmsKeyName": "A String", # The name of the encryption key that is stored in Google Cloud KMS. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key The fully-qualifed key name may be returned for resource GET requests. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key /cryptoKeyVersions/1 "kmsKeyServiceAccount": "A String", # The service account being used for the encryption request for the given KMS key. If absent, the Compute Engine default service account is used. 
For example: "kmsKeyServiceAccount": "name@project_id.iam.gserviceaccount.com/ @@ -367,7 +367,7 @@

Method Details

"fileType": "A String", # The file type of source file. }, }, - "source": "A String", # Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk. + "source": "A String", # Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk. "type": "A String", # Specifies the type of the disk, either SCRATCH or PERSISTENT. If not specified, the default is PERSISTENT. "userLicenses": [ # [Output Only] A list of user provided licenses. It represents a list of URLs to the license resource. Unlike regular licenses, user provided licenses can be modified after the disk is created. "A String", @@ -912,7 +912,7 @@

Method Details

"resourcePolicies": [ # Resource policies applied to this disk for automatic snapshot creations. Specified using the full or partial URL. For instance template, specify only the resource policy name. "A String", ], - "sourceImage": "A String", # The source image to create this disk. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set. + "sourceImage": "A String", # The source image to create this disk. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set. "sourceImageEncryptionKey": { # The customer-supplied encryption key of the source image. Required if the source image is protected by a customer-supplied encryption key. InstanceTemplate and InstancePropertiesPatch do not store customer-supplied encryption keys, so you cannot create disks for instances in a managed instance group if the source images are encrypted with your own keys. "kmsKeyName": "A String", # The name of the encryption key that is stored in Google Cloud KMS. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key The fully-qualifed key name may be returned for resource GET requests. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key /cryptoKeyVersions/1 "kmsKeyServiceAccount": "A String", # The service account being used for the encryption request for the given KMS key. If absent, the Compute Engine default service account is used. For example: "kmsKeyServiceAccount": "name@project_id.iam.gserviceaccount.com/ @@ -920,8 +920,8 @@

Method Details

"rsaEncryptedKey": "A String", # Specifies an RFC 4648 base64 encoded, RSA-wrapped 2048-bit customer-supplied encryption key to either encrypt or decrypt this resource. You can provide either the rawKey or the rsaEncryptedKey. For example: "rsaEncryptedKey": "ieCx/NcW06PcT7Ep1X6LUTc/hLvUDYyzSZPPVCVPTVEohpeHASqC8uw5TzyO9U+Fka9JFH z0mBibXUInrC/jEk014kCK/NPjYgEMOyssZ4ZINPKxlUh2zn1bV+MCaTICrdmuSBTWlUUiFoD D6PYznLwh8ZNdaheCeZ8ewEXgFQ8V+sDroLaN3Xs3MDTXQEMMoNUXMCZEIpg9Vtp9x2oe==" The key must meet the following requirements before you can provide it to Compute Engine: 1. The key is wrapped using a RSA public key certificate provided by Google. 2. After being wrapped, the key must be encoded in RFC 4648 base64 encoding. Gets the RSA public key certificate provided by Google at: https://cloud-certs.storage.googleapis.com/google-cloud-csek-ingress.pem "sha256": "A String", # [Output only] The RFC 4648 base64 encoded SHA-256 hash of the customer-supplied encryption key that protects this resource. }, - "sourceInstantSnapshot": "A String", # The source instant-snapshot to create this disk. When creating a new instance, one of initializeParams.sourceSnapshot or initializeParams.sourceInstantSnapshot initializeParams.sourceImage or disks.source is required except for local SSD. To create a disk with a snapshot that you created, specify the snapshot name in the following format: us-central1-a/instantSnapshots/my-backup If the source instant-snapshot is deleted later, this field will not be set. - "sourceSnapshot": "A String", # The source snapshot to create this disk. When creating a new instance, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required except for local SSD. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set. + "sourceInstantSnapshot": "A String", # The source instant-snapshot to create this disk. When creating a new instance boot disk, one of initializeParams.sourceSnapshot or initializeParams.sourceInstantSnapshot initializeParams.sourceImage or disks.source is required. To create a disk with a snapshot that you created, specify the snapshot name in the following format: us-central1-a/instantSnapshots/my-backup If the source instant-snapshot is deleted later, this field will not be set. + "sourceSnapshot": "A String", # The source snapshot to create this disk. When creating a new instance boot disk, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set. "sourceSnapshotEncryptionKey": { # The customer-supplied encryption key of the source snapshot. "kmsKeyName": "A String", # The name of the encryption key that is stored in Google Cloud KMS. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key The fully-qualifed key name may be returned for resource GET requests. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key /cryptoKeyVersions/1 "kmsKeyServiceAccount": "A String", # The service account being used for the encryption request for the given KMS key. If absent, the Compute Engine default service account is used. 
For example: "kmsKeyServiceAccount": "name@project_id.iam.gserviceaccount.com/ @@ -963,7 +963,7 @@

Method Details

"fileType": "A String", # The file type of source file. }, }, - "source": "A String", # Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk. + "source": "A String", # Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk. "type": "A String", # Specifies the type of the disk, either SCRATCH or PERSISTENT. If not specified, the default is PERSISTENT. "userLicenses": [ # [Output Only] A list of user provided licenses. It represents a list of URLs to the license resource. Unlike regular licenses, user provided licenses can be modified after the disk is created. "A String", @@ -1538,7 +1538,7 @@

Method Details

"resourcePolicies": [ # Resource policies applied to this disk for automatic snapshot creations. Specified using the full or partial URL. For instance template, specify only the resource policy name. "A String", ], - "sourceImage": "A String", # The source image to create this disk. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set. + "sourceImage": "A String", # The source image to create this disk. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set. "sourceImageEncryptionKey": { # The customer-supplied encryption key of the source image. Required if the source image is protected by a customer-supplied encryption key. InstanceTemplate and InstancePropertiesPatch do not store customer-supplied encryption keys, so you cannot create disks for instances in a managed instance group if the source images are encrypted with your own keys. "kmsKeyName": "A String", # The name of the encryption key that is stored in Google Cloud KMS. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key The fully-qualifed key name may be returned for resource GET requests. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key /cryptoKeyVersions/1 "kmsKeyServiceAccount": "A String", # The service account being used for the encryption request for the given KMS key. If absent, the Compute Engine default service account is used. For example: "kmsKeyServiceAccount": "name@project_id.iam.gserviceaccount.com/ @@ -1546,8 +1546,8 @@

Method Details

"rsaEncryptedKey": "A String", # Specifies an RFC 4648 base64 encoded, RSA-wrapped 2048-bit customer-supplied encryption key to either encrypt or decrypt this resource. You can provide either the rawKey or the rsaEncryptedKey. For example: "rsaEncryptedKey": "ieCx/NcW06PcT7Ep1X6LUTc/hLvUDYyzSZPPVCVPTVEohpeHASqC8uw5TzyO9U+Fka9JFH z0mBibXUInrC/jEk014kCK/NPjYgEMOyssZ4ZINPKxlUh2zn1bV+MCaTICrdmuSBTWlUUiFoD D6PYznLwh8ZNdaheCeZ8ewEXgFQ8V+sDroLaN3Xs3MDTXQEMMoNUXMCZEIpg9Vtp9x2oe==" The key must meet the following requirements before you can provide it to Compute Engine: 1. The key is wrapped using a RSA public key certificate provided by Google. 2. After being wrapped, the key must be encoded in RFC 4648 base64 encoding. Gets the RSA public key certificate provided by Google at: https://cloud-certs.storage.googleapis.com/google-cloud-csek-ingress.pem "sha256": "A String", # [Output only] The RFC 4648 base64 encoded SHA-256 hash of the customer-supplied encryption key that protects this resource. }, - "sourceInstantSnapshot": "A String", # The source instant-snapshot to create this disk. When creating a new instance, one of initializeParams.sourceSnapshot or initializeParams.sourceInstantSnapshot initializeParams.sourceImage or disks.source is required except for local SSD. To create a disk with a snapshot that you created, specify the snapshot name in the following format: us-central1-a/instantSnapshots/my-backup If the source instant-snapshot is deleted later, this field will not be set. - "sourceSnapshot": "A String", # The source snapshot to create this disk. When creating a new instance, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required except for local SSD. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set. + "sourceInstantSnapshot": "A String", # The source instant-snapshot to create this disk. When creating a new instance boot disk, one of initializeParams.sourceSnapshot or initializeParams.sourceInstantSnapshot initializeParams.sourceImage or disks.source is required. To create a disk with a snapshot that you created, specify the snapshot name in the following format: us-central1-a/instantSnapshots/my-backup If the source instant-snapshot is deleted later, this field will not be set. + "sourceSnapshot": "A String", # The source snapshot to create this disk. When creating a new instance boot disk, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set. "sourceSnapshotEncryptionKey": { # The customer-supplied encryption key of the source snapshot. "kmsKeyName": "A String", # The name of the encryption key that is stored in Google Cloud KMS. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key The fully-qualifed key name may be returned for resource GET requests. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key /cryptoKeyVersions/1 "kmsKeyServiceAccount": "A String", # The service account being used for the encryption request for the given KMS key. If absent, the Compute Engine default service account is used. 
For example: "kmsKeyServiceAccount": "name@project_id.iam.gserviceaccount.com/ @@ -1589,7 +1589,7 @@

Method Details

"fileType": "A String", # The file type of source file. }, }, - "source": "A String", # Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk. + "source": "A String", # Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk. "type": "A String", # Specifies the type of the disk, either SCRATCH or PERSISTENT. If not specified, the default is PERSISTENT. "userLicenses": [ # [Output Only] A list of user provided licenses. It represents a list of URLs to the license resource. Unlike regular licenses, user provided licenses can be modified after the disk is created. "A String", diff --git a/docs/dyn/compute_beta.networks.html b/docs/dyn/compute_beta.networks.html index 21be92bf2fe..cadc7eb484c 100644 --- a/docs/dyn/compute_beta.networks.html +++ b/docs/dyn/compute_beta.networks.html @@ -106,7 +106,7 @@

 Retrieves the next page of results.
 patch(project, network, body=None, requestId=None, x__xgafv=None)
-Patches the specified network with the data included in the request. Only the following fields can be modified: routingConfig.routingMode.
+Patches the specified network with the data included in the request. Only routingConfig can be modified.
 removePeering(project, network, body=None, requestId=None, x__xgafv=None)
 Removes a peering from the specified network.
@@ -1093,7 +1093,7 @@ Method Details
 patch(project, network, body=None, requestId=None, x__xgafv=None)
-Patches the specified network with the data included in the request. Only the following fields can be modified: routingConfig.routingMode.
+Patches the specified network with the data included in the request. Only routingConfig can be modified.

 Args:
   project: string, Project ID for this request. (required)
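The reworded description broadens networks.patch from routingConfig.routingMode alone to the whole routingConfig object. A hedged sketch of the call (routingMode is one documented routingConfig subfield; project and network names are placeholders):

# Hedged sketch, not from the patch: patch a network's routingConfig, the
# only field networks.patch may modify per the updated description.
from googleapiclient import discovery

compute = discovery.build("compute", "beta")

compute.networks().patch(
    project="my-project",   # placeholder
    network="my-network",   # placeholder
    body={"routingConfig": {"routingMode": "GLOBAL"}},
).execute()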
diff --git a/docs/dyn/compute_beta.regionInstanceGroupManagers.html b/docs/dyn/compute_beta.regionInstanceGroupManagers.html
index 2553b5b14b5..3a97cfe0e54 100644
--- a/docs/dyn/compute_beta.regionInstanceGroupManagers.html
+++ b/docs/dyn/compute_beta.regionInstanceGroupManagers.html
@@ -1086,6 +1086,10 @@ Method Details
     "rank": 42, # Preference of this instance selection. Lower number means higher preference. MIG will first try to create a VM based on the machine type with the lowest rank and fall back to the next rank based on availability. Machine types and instance selections with the same rank have the same preference.
   },
 },
+"provisioningModelMix": { # Provisioning model configuration used by this managed instance group to create instances.
+  "standardCapacityBase": 42, # The base capacity that will always use Standard VMs, to avoid the risk of more preemption than the minimum capacity the user needs. MIG will create only Standard VMs until it reaches standard_capacity_base and only then will start using standard_capacity_percent_above_base to mix Spot with Standard VMs.
+  "standardCapacityPercentAboveBase": 42, # The percentage of target capacity that should use Standard VMs. The remaining percentage will use Spot VMs. The percentage applies only to the capacity above standard_capacity_base.
+},
 },
 "instanceGroup": "A String", # [Output Only] The URL of the Instance Group resource.
 "instanceLifecyclePolicy": { # The repair policy for this managed instance group.
@@ -1108,6 +1112,8 @@ Method Details
 },
 },
 "region": "A String", # [Output Only] The URL of the region where the managed instance group resides (for regional resources).
+"satisfiesPzi": True or False, # [Output Only] Reserved for future use.
+"satisfiesPzs": True or False, # [Output Only] Reserved for future use.
 "selfLink": "A String", # [Output Only] The URL for this managed instance group. The server defines this URL.
 "serviceAccount": "A String", # The service account to be used as credentials for all operations performed by the managed instance group on instances. The service account needs all permissions required to create and delete instances. By default, the service account {projectNumber}@cloudservices.gserviceaccount.com is used.
 "standbyPolicy": { # Standby policy for stopped and suspended instances.

Method Details

"rank": 42, # Preference of this instance selection. Lower number means higher preference. MIG will first try to create a VM based on the machine-type with lowest rank and fallback to next rank based on availability. Machine types and instance selections with the same rank have the same preference. }, }, + "provisioningModelMix": { # Provisioning model configuration used by this managed instance group to create instances. + "standardCapacityBase": 42, # The base capacity that will always use Standard VMs to avoid risk of more preemption than the minimum capacity user needs. MIG will create only Standard VMs until it reaches standard_capacity_base and only then will start using standard_capacity_percent_above_base to mix Spot with Standard VMs. + "standardCapacityPercentAboveBase": 42, # The percentage of target capacity that should use Standard VM. The remaining percentage will use Spot VMs. The percentage applies only to the capacity above standard_capacity_base. + }, }, "instanceGroup": "A String", # [Output Only] The URL of the Instance Group resource. "instanceLifecyclePolicy": { # The repair policy for this managed instance group. @@ -1285,6 +1295,8 @@

Method Details

}, }, "region": "A String", # [Output Only] The URL of the region where the managed instance group resides (for regional resources). + "satisfiesPzi": True or False, # [Output Only] Reserved for future use. + "satisfiesPzs": True or False, # [Output Only] Reserved for future use. "selfLink": "A String", # [Output Only] The URL for this managed instance group. The server defines this URL. "serviceAccount": "A String", # The service account to be used as credentials for all operations performed by the managed instance group on instances. The service accounts needs all permissions required to create and delete instances. By default, the service account {projectNumber}@cloudservices.gserviceaccount.com is used. "standbyPolicy": { # Standby policy for stopped and suspended instances. @@ -1569,6 +1581,10 @@

Method Details

"rank": 42, # Preference of this instance selection. Lower number means higher preference. MIG will first try to create a VM based on the machine-type with lowest rank and fallback to next rank based on availability. Machine types and instance selections with the same rank have the same preference. }, }, + "provisioningModelMix": { # Provisioning model configuration used by this managed instance group to create instances. + "standardCapacityBase": 42, # The base capacity that will always use Standard VMs to avoid risk of more preemption than the minimum capacity user needs. MIG will create only Standard VMs until it reaches standard_capacity_base and only then will start using standard_capacity_percent_above_base to mix Spot with Standard VMs. + "standardCapacityPercentAboveBase": 42, # The percentage of target capacity that should use Standard VM. The remaining percentage will use Spot VMs. The percentage applies only to the capacity above standard_capacity_base. + }, }, "instanceGroup": "A String", # [Output Only] The URL of the Instance Group resource. "instanceLifecyclePolicy": { # The repair policy for this managed instance group. @@ -1591,6 +1607,8 @@

Method Details

}, }, "region": "A String", # [Output Only] The URL of the region where the managed instance group resides (for regional resources). + "satisfiesPzi": True or False, # [Output Only] Reserved for future use. + "satisfiesPzs": True or False, # [Output Only] Reserved for future use. "selfLink": "A String", # [Output Only] The URL for this managed instance group. The server defines this URL. "serviceAccount": "A String", # The service account to be used as credentials for all operations performed by the managed instance group on instances. The service accounts needs all permissions required to create and delete instances. By default, the service account {projectNumber}@cloudservices.gserviceaccount.com is used. "standbyPolicy": { # Standby policy for stopped and suspended instances. @@ -1892,6 +1910,7 @@

Method Details

}, "propertiesFromFlexibilityPolicy": { # [Output Only] Instance properties selected for this instance resulting from InstanceFlexibilityPolicy. "machineType": "A String", # The machine type to be used for this instance. + "provisioningModel": "A String", # The provisioning model to be used for this instance. }, "targetStatus": "A String", # [Output Only] The eventual status of the instance. The instance group manager will not be identified as stable till each managed instance reaches its targetStatus. "version": { # [Output Only] Intended version of this instance. @@ -2093,6 +2112,10 @@

Method Details

"rank": 42, # Preference of this instance selection. Lower number means higher preference. MIG will first try to create a VM based on the machine-type with lowest rank and fallback to next rank based on availability. Machine types and instance selections with the same rank have the same preference. }, }, + "provisioningModelMix": { # Provisioning model configuration used by this managed instance group to create instances. + "standardCapacityBase": 42, # The base capacity that will always use Standard VMs to avoid risk of more preemption than the minimum capacity user needs. MIG will create only Standard VMs until it reaches standard_capacity_base and only then will start using standard_capacity_percent_above_base to mix Spot with Standard VMs. + "standardCapacityPercentAboveBase": 42, # The percentage of target capacity that should use Standard VM. The remaining percentage will use Spot VMs. The percentage applies only to the capacity above standard_capacity_base. + }, }, "instanceGroup": "A String", # [Output Only] The URL of the Instance Group resource. "instanceLifecyclePolicy": { # The repair policy for this managed instance group. @@ -2115,6 +2138,8 @@

Method Details

}, }, "region": "A String", # [Output Only] The URL of the region where the managed instance group resides (for regional resources). + "satisfiesPzi": True or False, # [Output Only] Reserved for future use. + "satisfiesPzs": True or False, # [Output Only] Reserved for future use. "selfLink": "A String", # [Output Only] The URL for this managed instance group. The server defines this URL. "serviceAccount": "A String", # The service account to be used as credentials for all operations performed by the managed instance group on instances. The service accounts needs all permissions required to create and delete instances. By default, the service account {projectNumber}@cloudservices.gserviceaccount.com is used. "standbyPolicy": { # Standby policy for stopped and suspended instances. @@ -3922,6 +3947,10 @@

Method Details

"rank": 42, # Preference of this instance selection. Lower number means higher preference. MIG will first try to create a VM based on the machine-type with lowest rank and fallback to next rank based on availability. Machine types and instance selections with the same rank have the same preference. }, }, + "provisioningModelMix": { # Provisioning model configuration used by this managed instance group to create instances. + "standardCapacityBase": 42, # The base capacity that will always use Standard VMs to avoid risk of more preemption than the minimum capacity user needs. MIG will create only Standard VMs until it reaches standard_capacity_base and only then will start using standard_capacity_percent_above_base to mix Spot with Standard VMs. + "standardCapacityPercentAboveBase": 42, # The percentage of target capacity that should use Standard VM. The remaining percentage will use Spot VMs. The percentage applies only to the capacity above standard_capacity_base. + }, }, "instanceGroup": "A String", # [Output Only] The URL of the Instance Group resource. "instanceLifecyclePolicy": { # The repair policy for this managed instance group. @@ -3944,6 +3973,8 @@

Method Details

}, }, "region": "A String", # [Output Only] The URL of the region where the managed instance group resides (for regional resources). + "satisfiesPzi": True or False, # [Output Only] Reserved for future use. + "satisfiesPzs": True or False, # [Output Only] Reserved for future use. "selfLink": "A String", # [Output Only] The URL for this managed instance group. The server defines this URL. "serviceAccount": "A String", # The service account to be used as credentials for all operations performed by the managed instance group on instances. The service accounts needs all permissions required to create and delete instances. By default, the service account {projectNumber}@cloudservices.gserviceaccount.com is used. "standbyPolicy": { # Standby policy for stopped and suspended instances. diff --git a/docs/dyn/compute_beta.regionInstanceTemplates.html b/docs/dyn/compute_beta.regionInstanceTemplates.html index 624923c00df..611ffe9837e 100644 --- a/docs/dyn/compute_beta.regionInstanceTemplates.html +++ b/docs/dyn/compute_beta.regionInstanceTemplates.html @@ -315,7 +315,7 @@

Method Details

"resourcePolicies": [ # Resource policies applied to this disk for automatic snapshot creations. Specified using the full or partial URL. For instance template, specify only the resource policy name. "A String", ], - "sourceImage": "A String", # The source image to create this disk. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set. + "sourceImage": "A String", # The source image to create this disk. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set. "sourceImageEncryptionKey": { # The customer-supplied encryption key of the source image. Required if the source image is protected by a customer-supplied encryption key. InstanceTemplate and InstancePropertiesPatch do not store customer-supplied encryption keys, so you cannot create disks for instances in a managed instance group if the source images are encrypted with your own keys. "kmsKeyName": "A String", # The name of the encryption key that is stored in Google Cloud KMS. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key The fully-qualifed key name may be returned for resource GET requests. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key /cryptoKeyVersions/1 "kmsKeyServiceAccount": "A String", # The service account being used for the encryption request for the given KMS key. If absent, the Compute Engine default service account is used. For example: "kmsKeyServiceAccount": "name@project_id.iam.gserviceaccount.com/ @@ -323,8 +323,8 @@

Method Details

"rsaEncryptedKey": "A String", # Specifies an RFC 4648 base64 encoded, RSA-wrapped 2048-bit customer-supplied encryption key to either encrypt or decrypt this resource. You can provide either the rawKey or the rsaEncryptedKey. For example: "rsaEncryptedKey": "ieCx/NcW06PcT7Ep1X6LUTc/hLvUDYyzSZPPVCVPTVEohpeHASqC8uw5TzyO9U+Fka9JFH z0mBibXUInrC/jEk014kCK/NPjYgEMOyssZ4ZINPKxlUh2zn1bV+MCaTICrdmuSBTWlUUiFoD D6PYznLwh8ZNdaheCeZ8ewEXgFQ8V+sDroLaN3Xs3MDTXQEMMoNUXMCZEIpg9Vtp9x2oe==" The key must meet the following requirements before you can provide it to Compute Engine: 1. The key is wrapped using a RSA public key certificate provided by Google. 2. After being wrapped, the key must be encoded in RFC 4648 base64 encoding. Gets the RSA public key certificate provided by Google at: https://cloud-certs.storage.googleapis.com/google-cloud-csek-ingress.pem "sha256": "A String", # [Output only] The RFC 4648 base64 encoded SHA-256 hash of the customer-supplied encryption key that protects this resource. }, - "sourceInstantSnapshot": "A String", # The source instant-snapshot to create this disk. When creating a new instance, one of initializeParams.sourceSnapshot or initializeParams.sourceInstantSnapshot initializeParams.sourceImage or disks.source is required except for local SSD. To create a disk with a snapshot that you created, specify the snapshot name in the following format: us-central1-a/instantSnapshots/my-backup If the source instant-snapshot is deleted later, this field will not be set. - "sourceSnapshot": "A String", # The source snapshot to create this disk. When creating a new instance, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required except for local SSD. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set. + "sourceInstantSnapshot": "A String", # The source instant-snapshot to create this disk. When creating a new instance boot disk, one of initializeParams.sourceSnapshot or initializeParams.sourceInstantSnapshot initializeParams.sourceImage or disks.source is required. To create a disk with a snapshot that you created, specify the snapshot name in the following format: us-central1-a/instantSnapshots/my-backup If the source instant-snapshot is deleted later, this field will not be set. + "sourceSnapshot": "A String", # The source snapshot to create this disk. When creating a new instance boot disk, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set. "sourceSnapshotEncryptionKey": { # The customer-supplied encryption key of the source snapshot. "kmsKeyName": "A String", # The name of the encryption key that is stored in Google Cloud KMS. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key The fully-qualifed key name may be returned for resource GET requests. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key /cryptoKeyVersions/1 "kmsKeyServiceAccount": "A String", # The service account being used for the encryption request for the given KMS key. If absent, the Compute Engine default service account is used. 
For example: "kmsKeyServiceAccount": "name@project_id.iam.gserviceaccount.com/ @@ -366,7 +366,7 @@
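The encryption-key messages above accept either a KMS-managed key or an RSA-wrapped customer-supplied key. A minimal sketch of both shapes (all values are placeholders; the wrapped key is truncated):

    # Sketch only: the two mutually exclusive key shapes accepted by
    # sourceImageEncryptionKey / sourceSnapshotEncryptionKey above.
    kms_backed_key = {
        "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/key_region/cryptoKeys/key",
        # Optional; defaults to the Compute Engine default service account.
        "kmsKeyServiceAccount": "name@project_id.iam.gserviceaccount.com",
    }

    customer_wrapped_key = {
        # RFC 4648 base64, RSA-wrapped 2048-bit key (placeholder, truncated).
        "rsaEncryptedKey": "ieCx/NcW06PcT7Ep1X6LUTc/...",
    }

    initialize_params = {
        "sourceImage": "projects/debian-cloud/global/images/family/debian-9",
        # Required only when the source image is CSEK-protected:
        "sourceImageEncryptionKey": kms_backed_key,
    }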

Method Details

"fileType": "A String", # The file type of source file. }, }, - "source": "A String", # Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk. + "source": "A String", # Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk. "type": "A String", # Specifies the type of the disk, either SCRATCH or PERSISTENT. If not specified, the default is PERSISTENT. "userLicenses": [ # [Output Only] A list of user provided licenses. It represents a list of URLs to the license resource. Unlike regular licenses, user provided licenses can be modified after the disk is created. "A String", @@ -629,7 +629,7 @@

Method Details

"resourcePolicies": [ # Resource policies applied to this disk for automatic snapshot creations. Specified using the full or partial URL. For instance template, specify only the resource policy name. "A String", ], - "sourceImage": "A String", # The source image to create this disk. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set. + "sourceImage": "A String", # The source image to create this disk. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set. "sourceImageEncryptionKey": { # The customer-supplied encryption key of the source image. Required if the source image is protected by a customer-supplied encryption key. InstanceTemplate and InstancePropertiesPatch do not store customer-supplied encryption keys, so you cannot create disks for instances in a managed instance group if the source images are encrypted with your own keys. "kmsKeyName": "A String", # The name of the encryption key that is stored in Google Cloud KMS. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key The fully-qualifed key name may be returned for resource GET requests. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key /cryptoKeyVersions/1 "kmsKeyServiceAccount": "A String", # The service account being used for the encryption request for the given KMS key. If absent, the Compute Engine default service account is used. For example: "kmsKeyServiceAccount": "name@project_id.iam.gserviceaccount.com/ @@ -637,8 +637,8 @@

Method Details

"rsaEncryptedKey": "A String", # Specifies an RFC 4648 base64 encoded, RSA-wrapped 2048-bit customer-supplied encryption key to either encrypt or decrypt this resource. You can provide either the rawKey or the rsaEncryptedKey. For example: "rsaEncryptedKey": "ieCx/NcW06PcT7Ep1X6LUTc/hLvUDYyzSZPPVCVPTVEohpeHASqC8uw5TzyO9U+Fka9JFH z0mBibXUInrC/jEk014kCK/NPjYgEMOyssZ4ZINPKxlUh2zn1bV+MCaTICrdmuSBTWlUUiFoD D6PYznLwh8ZNdaheCeZ8ewEXgFQ8V+sDroLaN3Xs3MDTXQEMMoNUXMCZEIpg9Vtp9x2oe==" The key must meet the following requirements before you can provide it to Compute Engine: 1. The key is wrapped using a RSA public key certificate provided by Google. 2. After being wrapped, the key must be encoded in RFC 4648 base64 encoding. Gets the RSA public key certificate provided by Google at: https://cloud-certs.storage.googleapis.com/google-cloud-csek-ingress.pem "sha256": "A String", # [Output only] The RFC 4648 base64 encoded SHA-256 hash of the customer-supplied encryption key that protects this resource. }, - "sourceInstantSnapshot": "A String", # The source instant-snapshot to create this disk. When creating a new instance, one of initializeParams.sourceSnapshot or initializeParams.sourceInstantSnapshot initializeParams.sourceImage or disks.source is required except for local SSD. To create a disk with a snapshot that you created, specify the snapshot name in the following format: us-central1-a/instantSnapshots/my-backup If the source instant-snapshot is deleted later, this field will not be set. - "sourceSnapshot": "A String", # The source snapshot to create this disk. When creating a new instance, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required except for local SSD. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set. + "sourceInstantSnapshot": "A String", # The source instant-snapshot to create this disk. When creating a new instance boot disk, one of initializeParams.sourceSnapshot or initializeParams.sourceInstantSnapshot initializeParams.sourceImage or disks.source is required. To create a disk with a snapshot that you created, specify the snapshot name in the following format: us-central1-a/instantSnapshots/my-backup If the source instant-snapshot is deleted later, this field will not be set. + "sourceSnapshot": "A String", # The source snapshot to create this disk. When creating a new instance boot disk, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set. "sourceSnapshotEncryptionKey": { # The customer-supplied encryption key of the source snapshot. "kmsKeyName": "A String", # The name of the encryption key that is stored in Google Cloud KMS. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key The fully-qualifed key name may be returned for resource GET requests. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key /cryptoKeyVersions/1 "kmsKeyServiceAccount": "A String", # The service account being used for the encryption request for the given KMS key. If absent, the Compute Engine default service account is used. 
For example: "kmsKeyServiceAccount": "name@project_id.iam.gserviceaccount.com/ @@ -680,7 +680,7 @@

Method Details

"fileType": "A String", # The file type of source file. }, }, - "source": "A String", # Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk. + "source": "A String", # Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk. "type": "A String", # Specifies the type of the disk, either SCRATCH or PERSISTENT. If not specified, the default is PERSISTENT. "userLicenses": [ # [Output Only] A list of user provided licenses. It represents a list of URLs to the license resource. Unlike regular licenses, user provided licenses can be modified after the disk is created. "A String", @@ -1077,7 +1077,7 @@

Method Details

"resourcePolicies": [ # Resource policies applied to this disk for automatic snapshot creations. Specified using the full or partial URL. For instance template, specify only the resource policy name. "A String", ], - "sourceImage": "A String", # The source image to create this disk. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set. + "sourceImage": "A String", # The source image to create this disk. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set. "sourceImageEncryptionKey": { # The customer-supplied encryption key of the source image. Required if the source image is protected by a customer-supplied encryption key. InstanceTemplate and InstancePropertiesPatch do not store customer-supplied encryption keys, so you cannot create disks for instances in a managed instance group if the source images are encrypted with your own keys. "kmsKeyName": "A String", # The name of the encryption key that is stored in Google Cloud KMS. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key The fully-qualifed key name may be returned for resource GET requests. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key /cryptoKeyVersions/1 "kmsKeyServiceAccount": "A String", # The service account being used for the encryption request for the given KMS key. If absent, the Compute Engine default service account is used. For example: "kmsKeyServiceAccount": "name@project_id.iam.gserviceaccount.com/ @@ -1085,8 +1085,8 @@

Method Details

"rsaEncryptedKey": "A String", # Specifies an RFC 4648 base64 encoded, RSA-wrapped 2048-bit customer-supplied encryption key to either encrypt or decrypt this resource. You can provide either the rawKey or the rsaEncryptedKey. For example: "rsaEncryptedKey": "ieCx/NcW06PcT7Ep1X6LUTc/hLvUDYyzSZPPVCVPTVEohpeHASqC8uw5TzyO9U+Fka9JFH z0mBibXUInrC/jEk014kCK/NPjYgEMOyssZ4ZINPKxlUh2zn1bV+MCaTICrdmuSBTWlUUiFoD D6PYznLwh8ZNdaheCeZ8ewEXgFQ8V+sDroLaN3Xs3MDTXQEMMoNUXMCZEIpg9Vtp9x2oe==" The key must meet the following requirements before you can provide it to Compute Engine: 1. The key is wrapped using a RSA public key certificate provided by Google. 2. After being wrapped, the key must be encoded in RFC 4648 base64 encoding. Gets the RSA public key certificate provided by Google at: https://cloud-certs.storage.googleapis.com/google-cloud-csek-ingress.pem "sha256": "A String", # [Output only] The RFC 4648 base64 encoded SHA-256 hash of the customer-supplied encryption key that protects this resource. }, - "sourceInstantSnapshot": "A String", # The source instant-snapshot to create this disk. When creating a new instance, one of initializeParams.sourceSnapshot or initializeParams.sourceInstantSnapshot initializeParams.sourceImage or disks.source is required except for local SSD. To create a disk with a snapshot that you created, specify the snapshot name in the following format: us-central1-a/instantSnapshots/my-backup If the source instant-snapshot is deleted later, this field will not be set. - "sourceSnapshot": "A String", # The source snapshot to create this disk. When creating a new instance, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required except for local SSD. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set. + "sourceInstantSnapshot": "A String", # The source instant-snapshot to create this disk. When creating a new instance boot disk, one of initializeParams.sourceSnapshot or initializeParams.sourceInstantSnapshot initializeParams.sourceImage or disks.source is required. To create a disk with a snapshot that you created, specify the snapshot name in the following format: us-central1-a/instantSnapshots/my-backup If the source instant-snapshot is deleted later, this field will not be set. + "sourceSnapshot": "A String", # The source snapshot to create this disk. When creating a new instance boot disk, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set. "sourceSnapshotEncryptionKey": { # The customer-supplied encryption key of the source snapshot. "kmsKeyName": "A String", # The name of the encryption key that is stored in Google Cloud KMS. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key The fully-qualifed key name may be returned for resource GET requests. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key /cryptoKeyVersions/1 "kmsKeyServiceAccount": "A String", # The service account being used for the encryption request for the given KMS key. If absent, the Compute Engine default service account is used. 
For example: "kmsKeyServiceAccount": "name@project_id.iam.gserviceaccount.com/ @@ -1128,7 +1128,7 @@

Method Details

"fileType": "A String", # The file type of source file. }, }, - "source": "A String", # Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk. + "source": "A String", # Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk. "type": "A String", # Specifies the type of the disk, either SCRATCH or PERSISTENT. If not specified, the default is PERSISTENT. "userLicenses": [ # [Output Only] A list of user provided licenses. It represents a list of URLs to the license resource. Unlike regular licenses, user provided licenses can be modified after the disk is created. "A String", diff --git a/docs/dyn/compute_beta.regionInstances.html b/docs/dyn/compute_beta.regionInstances.html index f4178039776..f92c2b2bd29 100644 --- a/docs/dyn/compute_beta.regionInstances.html +++ b/docs/dyn/compute_beta.regionInstances.html @@ -158,7 +158,7 @@

Method Details

"resourcePolicies": [ # Resource policies applied to this disk for automatic snapshot creations. Specified using the full or partial URL. For instance template, specify only the resource policy name. "A String", ], - "sourceImage": "A String", # The source image to create this disk. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set. + "sourceImage": "A String", # The source image to create this disk. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set. "sourceImageEncryptionKey": { # The customer-supplied encryption key of the source image. Required if the source image is protected by a customer-supplied encryption key. InstanceTemplate and InstancePropertiesPatch do not store customer-supplied encryption keys, so you cannot create disks for instances in a managed instance group if the source images are encrypted with your own keys. "kmsKeyName": "A String", # The name of the encryption key that is stored in Google Cloud KMS. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key The fully-qualifed key name may be returned for resource GET requests. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key /cryptoKeyVersions/1 "kmsKeyServiceAccount": "A String", # The service account being used for the encryption request for the given KMS key. If absent, the Compute Engine default service account is used. For example: "kmsKeyServiceAccount": "name@project_id.iam.gserviceaccount.com/ @@ -166,8 +166,8 @@

Method Details

"rsaEncryptedKey": "A String", # Specifies an RFC 4648 base64 encoded, RSA-wrapped 2048-bit customer-supplied encryption key to either encrypt or decrypt this resource. You can provide either the rawKey or the rsaEncryptedKey. For example: "rsaEncryptedKey": "ieCx/NcW06PcT7Ep1X6LUTc/hLvUDYyzSZPPVCVPTVEohpeHASqC8uw5TzyO9U+Fka9JFH z0mBibXUInrC/jEk014kCK/NPjYgEMOyssZ4ZINPKxlUh2zn1bV+MCaTICrdmuSBTWlUUiFoD D6PYznLwh8ZNdaheCeZ8ewEXgFQ8V+sDroLaN3Xs3MDTXQEMMoNUXMCZEIpg9Vtp9x2oe==" The key must meet the following requirements before you can provide it to Compute Engine: 1. The key is wrapped using a RSA public key certificate provided by Google. 2. After being wrapped, the key must be encoded in RFC 4648 base64 encoding. Gets the RSA public key certificate provided by Google at: https://cloud-certs.storage.googleapis.com/google-cloud-csek-ingress.pem "sha256": "A String", # [Output only] The RFC 4648 base64 encoded SHA-256 hash of the customer-supplied encryption key that protects this resource. }, - "sourceInstantSnapshot": "A String", # The source instant-snapshot to create this disk. When creating a new instance, one of initializeParams.sourceSnapshot or initializeParams.sourceInstantSnapshot initializeParams.sourceImage or disks.source is required except for local SSD. To create a disk with a snapshot that you created, specify the snapshot name in the following format: us-central1-a/instantSnapshots/my-backup If the source instant-snapshot is deleted later, this field will not be set. - "sourceSnapshot": "A String", # The source snapshot to create this disk. When creating a new instance, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required except for local SSD. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set. + "sourceInstantSnapshot": "A String", # The source instant-snapshot to create this disk. When creating a new instance boot disk, one of initializeParams.sourceSnapshot or initializeParams.sourceInstantSnapshot initializeParams.sourceImage or disks.source is required. To create a disk with a snapshot that you created, specify the snapshot name in the following format: us-central1-a/instantSnapshots/my-backup If the source instant-snapshot is deleted later, this field will not be set. + "sourceSnapshot": "A String", # The source snapshot to create this disk. When creating a new instance boot disk, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set. "sourceSnapshotEncryptionKey": { # The customer-supplied encryption key of the source snapshot. "kmsKeyName": "A String", # The name of the encryption key that is stored in Google Cloud KMS. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key The fully-qualifed key name may be returned for resource GET requests. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key /cryptoKeyVersions/1 "kmsKeyServiceAccount": "A String", # The service account being used for the encryption request for the given KMS key. If absent, the Compute Engine default service account is used. 
For example: "kmsKeyServiceAccount": "name@project_id.iam.gserviceaccount.com/ @@ -209,7 +209,7 @@

Method Details

"fileType": "A String", # The file type of source file. }, }, - "source": "A String", # Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk. + "source": "A String", # Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk. "type": "A String", # Specifies the type of the disk, either SCRATCH or PERSISTENT. If not specified, the default is PERSISTENT. "userLicenses": [ # [Output Only] A list of user provided licenses. It represents a list of URLs to the license resource. Unlike regular licenses, user provided licenses can be modified after the disk is created. "A String", diff --git a/docs/dyn/compute_beta.regionZones.html b/docs/dyn/compute_beta.regionZones.html index eda5fa3d064..3a92644987f 100644 --- a/docs/dyn/compute_beta.regionZones.html +++ b/docs/dyn/compute_beta.regionZones.html @@ -112,7 +112,7 @@

Method Details

{ # Contains a list of zone resources. "id": "A String", # [Output Only] Unique identifier for the resource; defined by the server. "items": [ # A list of Zone resources. - { # Represents a Zone resource. A zone is a deployment area. These deployment areas are subsets of a region. For example the zone us-east1-a is located in the us-east1 region. For more information, read Regions and Zones. + { # Represents a Zone resource. A zone is a deployment area. These deployment areas are subsets of a region. For example the zone us-east1-b is located in the us-east1 region. For more information, read Regions and Zones. "availableCpuPlatforms": [ # [Output Only] Available cpu/platform selections for the zone. "A String", ], diff --git a/docs/dyn/compute_beta.zones.html b/docs/dyn/compute_beta.zones.html index 4bf0b6c3872..c72efc5bce1 100644 --- a/docs/dyn/compute_beta.zones.html +++ b/docs/dyn/compute_beta.zones.html @@ -107,7 +107,7 @@

Method Details

Returns: An object of the form: - { # Represents a Zone resource. A zone is a deployment area. These deployment areas are subsets of a region. For example the zone us-east1-a is located in the us-east1 region. For more information, read Regions and Zones. + { # Represents a Zone resource. A zone is a deployment area. These deployment areas are subsets of a region. For example the zone us-east1-b is located in the us-east1 region. For more information, read Regions and Zones. "availableCpuPlatforms": [ # [Output Only] Available cpu/platform selections for the zone. "A String", ], @@ -158,7 +158,7 @@
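A one-call sketch against the Zone resource described above (the project name is hypothetical):

    # Sketch only: fetch a Zone and print the region it belongs to,
    # e.g. us-east1-b -> .../regions/us-east1.
    from googleapiclient import discovery

    compute = discovery.build("compute", "beta")

    zone = compute.zones().get(project="my-project", zone="us-east1-b").execute()
    print(zone["name"], "->", zone["region"])
    print("CPU platforms:", zone.get("availableCpuPlatforms", []))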

Method Details

{ # Contains a list of zone resources. "id": "A String", # [Output Only] Unique identifier for the resource; defined by the server. "items": [ # A list of Zone resources. - { # Represents a Zone resource. A zone is a deployment area. These deployment areas are subsets of a region. For example the zone us-east1-a is located in the us-east1 region. For more information, read Regions and Zones. + { # Represents a Zone resource. A zone is a deployment area. These deployment areas are subsets of a region. For example the zone us-east1-b is located in the us-east1 region. For more information, read Regions and Zones. "availableCpuPlatforms": [ # [Output Only] Available cpu/platform selections for the zone. "A String", ], diff --git a/docs/dyn/compute_v1.instanceGroupManagers.html b/docs/dyn/compute_v1.instanceGroupManagers.html index 5b411ea82d7..ea71371d261 100644 --- a/docs/dyn/compute_v1.instanceGroupManagers.html +++ b/docs/dyn/compute_v1.instanceGroupManagers.html @@ -374,6 +374,8 @@

Method Details

}, ], "region": "A String", # [Output Only] The URL of the region where the managed instance group resides (for regional resources). + "satisfiesPzi": True or False, # [Output Only] Reserved for future use. + "satisfiesPzs": True or False, # [Output Only] Reserved for future use. "selfLink": "A String", # [Output Only] The URL for this managed instance group. The server defines this URL. "statefulPolicy": { # Stateful configuration for this Instanced Group Manager "preservedState": { # Configuration of preserved resources. @@ -1270,6 +1272,8 @@

Method Details

}, ], "region": "A String", # [Output Only] The URL of the region where the managed instance group resides (for regional resources). + "satisfiesPzi": True or False, # [Output Only] Reserved for future use. + "satisfiesPzs": True or False, # [Output Only] Reserved for future use. "selfLink": "A String", # [Output Only] The URL for this managed instance group. The server defines this URL. "statefulPolicy": { # Stateful configuration for this Instanced Group Manager "preservedState": { # Configuration of preserved resources. @@ -1414,6 +1418,8 @@

Method Details

}, ], "region": "A String", # [Output Only] The URL of the region where the managed instance group resides (for regional resources). + "satisfiesPzi": True or False, # [Output Only] Reserved for future use. + "satisfiesPzs": True or False, # [Output Only] Reserved for future use. "selfLink": "A String", # [Output Only] The URL for this managed instance group. The server defines this URL. "statefulPolicy": { # Stateful configuration for this Instanced Group Manager "preservedState": { # Configuration of preserved resources. @@ -1687,6 +1693,8 @@

Method Details

}, ], "region": "A String", # [Output Only] The URL of the region where the managed instance group resides (for regional resources). + "satisfiesPzi": True or False, # [Output Only] Reserved for future use. + "satisfiesPzs": True or False, # [Output Only] Reserved for future use. "selfLink": "A String", # [Output Only] The URL for this managed instance group. The server defines this URL. "statefulPolicy": { # Stateful configuration for this Instanced Group Manager "preservedState": { # Configuration of preserved resources. @@ -2168,6 +2176,8 @@

Method Details

}, ], "region": "A String", # [Output Only] The URL of the region where the managed instance group resides (for regional resources). + "satisfiesPzi": True or False, # [Output Only] Reserved for future use. + "satisfiesPzs": True or False, # [Output Only] Reserved for future use. "selfLink": "A String", # [Output Only] The URL for this managed instance group. The server defines this URL. "statefulPolicy": { # Stateful configuration for this Instanced Group Manager "preservedState": { # Configuration of preserved resources. diff --git a/docs/dyn/compute_v1.instanceTemplates.html b/docs/dyn/compute_v1.instanceTemplates.html index 5910f464fcb..d84f56ec3f6 100644 --- a/docs/dyn/compute_v1.instanceTemplates.html +++ b/docs/dyn/compute_v1.instanceTemplates.html @@ -198,7 +198,7 @@

Method Details

"resourcePolicies": [ # Resource policies applied to this disk for automatic snapshot creations. Specified using the full or partial URL. For instance template, specify only the resource policy name. "A String", ], - "sourceImage": "A String", # The source image to create this disk. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set. + "sourceImage": "A String", # The source image to create this disk. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set. "sourceImageEncryptionKey": { # The customer-supplied encryption key of the source image. Required if the source image is protected by a customer-supplied encryption key. InstanceTemplate and InstancePropertiesPatch do not store customer-supplied encryption keys, so you cannot create disks for instances in a managed instance group if the source images are encrypted with your own keys. "kmsKeyName": "A String", # The name of the encryption key that is stored in Google Cloud KMS. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key The fully-qualifed key name may be returned for resource GET requests. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key /cryptoKeyVersions/1 "kmsKeyServiceAccount": "A String", # The service account being used for the encryption request for the given KMS key. If absent, the Compute Engine default service account is used. For example: "kmsKeyServiceAccount": "name@project_id.iam.gserviceaccount.com/ @@ -206,7 +206,7 @@

Method Details

"rsaEncryptedKey": "A String", # Specifies an RFC 4648 base64 encoded, RSA-wrapped 2048-bit customer-supplied encryption key to either encrypt or decrypt this resource. You can provide either the rawKey or the rsaEncryptedKey. For example: "rsaEncryptedKey": "ieCx/NcW06PcT7Ep1X6LUTc/hLvUDYyzSZPPVCVPTVEohpeHASqC8uw5TzyO9U+Fka9JFH z0mBibXUInrC/jEk014kCK/NPjYgEMOyssZ4ZINPKxlUh2zn1bV+MCaTICrdmuSBTWlUUiFoD D6PYznLwh8ZNdaheCeZ8ewEXgFQ8V+sDroLaN3Xs3MDTXQEMMoNUXMCZEIpg9Vtp9x2oe==" The key must meet the following requirements before you can provide it to Compute Engine: 1. The key is wrapped using a RSA public key certificate provided by Google. 2. After being wrapped, the key must be encoded in RFC 4648 base64 encoding. Gets the RSA public key certificate provided by Google at: https://cloud-certs.storage.googleapis.com/google-cloud-csek-ingress.pem "sha256": "A String", # [Output only] The RFC 4648 base64 encoded SHA-256 hash of the customer-supplied encryption key that protects this resource. }, - "sourceSnapshot": "A String", # The source snapshot to create this disk. When creating a new instance, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required except for local SSD. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set. + "sourceSnapshot": "A String", # The source snapshot to create this disk. When creating a new instance boot disk, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set. "sourceSnapshotEncryptionKey": { # The customer-supplied encryption key of the source snapshot. "kmsKeyName": "A String", # The name of the encryption key that is stored in Google Cloud KMS. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key The fully-qualifed key name may be returned for resource GET requests. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key /cryptoKeyVersions/1 "kmsKeyServiceAccount": "A String", # The service account being used for the encryption request for the given KMS key. If absent, the Compute Engine default service account is used. For example: "kmsKeyServiceAccount": "name@project_id.iam.gserviceaccount.com/ @@ -247,7 +247,7 @@

Method Details

"fileType": "A String", # The file type of source file. }, }, - "source": "A String", # Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk. + "source": "A String", # Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk. "type": "A String", # Specifies the type of the disk, either SCRATCH or PERSISTENT. If not specified, the default is PERSISTENT. }, ], @@ -648,7 +648,7 @@

Method Details

"resourcePolicies": [ # Resource policies applied to this disk for automatic snapshot creations. Specified using the full or partial URL. For instance template, specify only the resource policy name. "A String", ], - "sourceImage": "A String", # The source image to create this disk. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set. + "sourceImage": "A String", # The source image to create this disk. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set. "sourceImageEncryptionKey": { # The customer-supplied encryption key of the source image. Required if the source image is protected by a customer-supplied encryption key. InstanceTemplate and InstancePropertiesPatch do not store customer-supplied encryption keys, so you cannot create disks for instances in a managed instance group if the source images are encrypted with your own keys. "kmsKeyName": "A String", # The name of the encryption key that is stored in Google Cloud KMS. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key The fully-qualifed key name may be returned for resource GET requests. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key /cryptoKeyVersions/1 "kmsKeyServiceAccount": "A String", # The service account being used for the encryption request for the given KMS key. If absent, the Compute Engine default service account is used. For example: "kmsKeyServiceAccount": "name@project_id.iam.gserviceaccount.com/ @@ -656,7 +656,7 @@

Method Details

"rsaEncryptedKey": "A String", # Specifies an RFC 4648 base64 encoded, RSA-wrapped 2048-bit customer-supplied encryption key to either encrypt or decrypt this resource. You can provide either the rawKey or the rsaEncryptedKey. For example: "rsaEncryptedKey": "ieCx/NcW06PcT7Ep1X6LUTc/hLvUDYyzSZPPVCVPTVEohpeHASqC8uw5TzyO9U+Fka9JFH z0mBibXUInrC/jEk014kCK/NPjYgEMOyssZ4ZINPKxlUh2zn1bV+MCaTICrdmuSBTWlUUiFoD D6PYznLwh8ZNdaheCeZ8ewEXgFQ8V+sDroLaN3Xs3MDTXQEMMoNUXMCZEIpg9Vtp9x2oe==" The key must meet the following requirements before you can provide it to Compute Engine: 1. The key is wrapped using a RSA public key certificate provided by Google. 2. After being wrapped, the key must be encoded in RFC 4648 base64 encoding. Gets the RSA public key certificate provided by Google at: https://cloud-certs.storage.googleapis.com/google-cloud-csek-ingress.pem "sha256": "A String", # [Output only] The RFC 4648 base64 encoded SHA-256 hash of the customer-supplied encryption key that protects this resource. }, - "sourceSnapshot": "A String", # The source snapshot to create this disk. When creating a new instance, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required except for local SSD. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set. + "sourceSnapshot": "A String", # The source snapshot to create this disk. When creating a new instance boot disk, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set. "sourceSnapshotEncryptionKey": { # The customer-supplied encryption key of the source snapshot. "kmsKeyName": "A String", # The name of the encryption key that is stored in Google Cloud KMS. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key The fully-qualifed key name may be returned for resource GET requests. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key /cryptoKeyVersions/1 "kmsKeyServiceAccount": "A String", # The service account being used for the encryption request for the given KMS key. If absent, the Compute Engine default service account is used. For example: "kmsKeyServiceAccount": "name@project_id.iam.gserviceaccount.com/ @@ -697,7 +697,7 @@

Method Details

"fileType": "A String", # The file type of source file. }, }, - "source": "A String", # Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk. + "source": "A String", # Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk. "type": "A String", # Specifies the type of the disk, either SCRATCH or PERSISTENT. If not specified, the default is PERSISTENT. }, ], @@ -1022,7 +1022,7 @@

Method Details

"resourcePolicies": [ # Resource policies applied to this disk for automatic snapshot creations. Specified using the full or partial URL. For instance template, specify only the resource policy name. "A String", ], - "sourceImage": "A String", # The source image to create this disk. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set. + "sourceImage": "A String", # The source image to create this disk. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set. "sourceImageEncryptionKey": { # The customer-supplied encryption key of the source image. Required if the source image is protected by a customer-supplied encryption key. InstanceTemplate and InstancePropertiesPatch do not store customer-supplied encryption keys, so you cannot create disks for instances in a managed instance group if the source images are encrypted with your own keys. "kmsKeyName": "A String", # The name of the encryption key that is stored in Google Cloud KMS. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key The fully-qualifed key name may be returned for resource GET requests. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key /cryptoKeyVersions/1 "kmsKeyServiceAccount": "A String", # The service account being used for the encryption request for the given KMS key. If absent, the Compute Engine default service account is used. For example: "kmsKeyServiceAccount": "name@project_id.iam.gserviceaccount.com/ @@ -1030,7 +1030,7 @@

Method Details

"rsaEncryptedKey": "A String", # Specifies an RFC 4648 base64 encoded, RSA-wrapped 2048-bit customer-supplied encryption key to either encrypt or decrypt this resource. You can provide either the rawKey or the rsaEncryptedKey. For example: "rsaEncryptedKey": "ieCx/NcW06PcT7Ep1X6LUTc/hLvUDYyzSZPPVCVPTVEohpeHASqC8uw5TzyO9U+Fka9JFH z0mBibXUInrC/jEk014kCK/NPjYgEMOyssZ4ZINPKxlUh2zn1bV+MCaTICrdmuSBTWlUUiFoD D6PYznLwh8ZNdaheCeZ8ewEXgFQ8V+sDroLaN3Xs3MDTXQEMMoNUXMCZEIpg9Vtp9x2oe==" The key must meet the following requirements before you can provide it to Compute Engine: 1. The key is wrapped using a RSA public key certificate provided by Google. 2. After being wrapped, the key must be encoded in RFC 4648 base64 encoding. Gets the RSA public key certificate provided by Google at: https://cloud-certs.storage.googleapis.com/google-cloud-csek-ingress.pem "sha256": "A String", # [Output only] The RFC 4648 base64 encoded SHA-256 hash of the customer-supplied encryption key that protects this resource. }, - "sourceSnapshot": "A String", # The source snapshot to create this disk. When creating a new instance, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required except for local SSD. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set. + "sourceSnapshot": "A String", # The source snapshot to create this disk. When creating a new instance boot disk, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set. "sourceSnapshotEncryptionKey": { # The customer-supplied encryption key of the source snapshot. "kmsKeyName": "A String", # The name of the encryption key that is stored in Google Cloud KMS. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key The fully-qualifed key name may be returned for resource GET requests. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key /cryptoKeyVersions/1 "kmsKeyServiceAccount": "A String", # The service account being used for the encryption request for the given KMS key. If absent, the Compute Engine default service account is used. For example: "kmsKeyServiceAccount": "name@project_id.iam.gserviceaccount.com/ @@ -1071,7 +1071,7 @@

Method Details

"fileType": "A String", # The file type of source file. }, }, - "source": "A String", # Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk. + "source": "A String", # Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk. "type": "A String", # Specifies the type of the disk, either SCRATCH or PERSISTENT. If not specified, the default is PERSISTENT. }, ], @@ -1425,7 +1425,7 @@

Method Details

"resourcePolicies": [ # Resource policies applied to this disk for automatic snapshot creations. Specified using the full or partial URL. For instance template, specify only the resource policy name. "A String", ], - "sourceImage": "A String", # The source image to create this disk. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set. + "sourceImage": "A String", # The source image to create this disk. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set. "sourceImageEncryptionKey": { # The customer-supplied encryption key of the source image. Required if the source image is protected by a customer-supplied encryption key. InstanceTemplate and InstancePropertiesPatch do not store customer-supplied encryption keys, so you cannot create disks for instances in a managed instance group if the source images are encrypted with your own keys. "kmsKeyName": "A String", # The name of the encryption key that is stored in Google Cloud KMS. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key The fully-qualifed key name may be returned for resource GET requests. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key /cryptoKeyVersions/1 "kmsKeyServiceAccount": "A String", # The service account being used for the encryption request for the given KMS key. If absent, the Compute Engine default service account is used. For example: "kmsKeyServiceAccount": "name@project_id.iam.gserviceaccount.com/ @@ -1433,7 +1433,7 @@

Method Details

"rsaEncryptedKey": "A String", # Specifies an RFC 4648 base64 encoded, RSA-wrapped 2048-bit customer-supplied encryption key to either encrypt or decrypt this resource. You can provide either the rawKey or the rsaEncryptedKey. For example: "rsaEncryptedKey": "ieCx/NcW06PcT7Ep1X6LUTc/hLvUDYyzSZPPVCVPTVEohpeHASqC8uw5TzyO9U+Fka9JFH z0mBibXUInrC/jEk014kCK/NPjYgEMOyssZ4ZINPKxlUh2zn1bV+MCaTICrdmuSBTWlUUiFoD D6PYznLwh8ZNdaheCeZ8ewEXgFQ8V+sDroLaN3Xs3MDTXQEMMoNUXMCZEIpg9Vtp9x2oe==" The key must meet the following requirements before you can provide it to Compute Engine: 1. The key is wrapped using a RSA public key certificate provided by Google. 2. After being wrapped, the key must be encoded in RFC 4648 base64 encoding. Gets the RSA public key certificate provided by Google at: https://cloud-certs.storage.googleapis.com/google-cloud-csek-ingress.pem "sha256": "A String", # [Output only] The RFC 4648 base64 encoded SHA-256 hash of the customer-supplied encryption key that protects this resource. }, - "sourceSnapshot": "A String", # The source snapshot to create this disk. When creating a new instance, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required except for local SSD. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set. + "sourceSnapshot": "A String", # The source snapshot to create this disk. When creating a new instance boot disk, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set. "sourceSnapshotEncryptionKey": { # The customer-supplied encryption key of the source snapshot. "kmsKeyName": "A String", # The name of the encryption key that is stored in Google Cloud KMS. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key The fully-qualifed key name may be returned for resource GET requests. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key /cryptoKeyVersions/1 "kmsKeyServiceAccount": "A String", # The service account being used for the encryption request for the given KMS key. If absent, the Compute Engine default service account is used. For example: "kmsKeyServiceAccount": "name@project_id.iam.gserviceaccount.com/ @@ -1474,7 +1474,7 @@

Method Details

"fileType": "A String", # The file type of source file. }, }, - "source": "A String", # Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk. + "source": "A String", # Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk. "type": "A String", # Specifies the type of the disk, either SCRATCH or PERSISTENT. If not specified, the default is PERSISTENT. }, ], diff --git a/docs/dyn/compute_v1.instances.html b/docs/dyn/compute_v1.instances.html index 060be2b0284..42dbdca60c7 100644 --- a/docs/dyn/compute_v1.instances.html +++ b/docs/dyn/compute_v1.instances.html @@ -594,7 +594,7 @@

Method Details

"resourcePolicies": [ # Resource policies applied to this disk for automatic snapshot creations. Specified using the full or partial URL. For instance template, specify only the resource policy name. "A String", ], - "sourceImage": "A String", # The source image to create this disk. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set. + "sourceImage": "A String", # The source image to create this disk. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set. "sourceImageEncryptionKey": { # The customer-supplied encryption key of the source image. Required if the source image is protected by a customer-supplied encryption key. InstanceTemplate and InstancePropertiesPatch do not store customer-supplied encryption keys, so you cannot create disks for instances in a managed instance group if the source images are encrypted with your own keys. "kmsKeyName": "A String", # The name of the encryption key that is stored in Google Cloud KMS. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key The fully-qualifed key name may be returned for resource GET requests. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key /cryptoKeyVersions/1 "kmsKeyServiceAccount": "A String", # The service account being used for the encryption request for the given KMS key. If absent, the Compute Engine default service account is used. For example: "kmsKeyServiceAccount": "name@project_id.iam.gserviceaccount.com/ @@ -602,7 +602,7 @@

Method Details

"rsaEncryptedKey": "A String", # Specifies an RFC 4648 base64 encoded, RSA-wrapped 2048-bit customer-supplied encryption key to either encrypt or decrypt this resource. You can provide either the rawKey or the rsaEncryptedKey. For example: "rsaEncryptedKey": "ieCx/NcW06PcT7Ep1X6LUTc/hLvUDYyzSZPPVCVPTVEohpeHASqC8uw5TzyO9U+Fka9JFH z0mBibXUInrC/jEk014kCK/NPjYgEMOyssZ4ZINPKxlUh2zn1bV+MCaTICrdmuSBTWlUUiFoD D6PYznLwh8ZNdaheCeZ8ewEXgFQ8V+sDroLaN3Xs3MDTXQEMMoNUXMCZEIpg9Vtp9x2oe==" The key must meet the following requirements before you can provide it to Compute Engine: 1. The key is wrapped using a RSA public key certificate provided by Google. 2. After being wrapped, the key must be encoded in RFC 4648 base64 encoding. Gets the RSA public key certificate provided by Google at: https://cloud-certs.storage.googleapis.com/google-cloud-csek-ingress.pem "sha256": "A String", # [Output only] The RFC 4648 base64 encoded SHA-256 hash of the customer-supplied encryption key that protects this resource. }, - "sourceSnapshot": "A String", # The source snapshot to create this disk. When creating a new instance, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required except for local SSD. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set. + "sourceSnapshot": "A String", # The source snapshot to create this disk. When creating a new instance boot disk, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set. "sourceSnapshotEncryptionKey": { # The customer-supplied encryption key of the source snapshot. "kmsKeyName": "A String", # The name of the encryption key that is stored in Google Cloud KMS. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key The fully-qualifed key name may be returned for resource GET requests. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key /cryptoKeyVersions/1 "kmsKeyServiceAccount": "A String", # The service account being used for the encryption request for the given KMS key. If absent, the Compute Engine default service account is used. For example: "kmsKeyServiceAccount": "name@project_id.iam.gserviceaccount.com/ @@ -643,7 +643,7 @@

Method Details

"fileType": "A String", # The file type of source file. }, }, - "source": "A String", # Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk. + "source": "A String", # Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk. "type": "A String", # Specifies the type of the disk, either SCRATCH or PERSISTENT. If not specified, the default is PERSISTENT. }, ], @@ -931,7 +931,7 @@

Method Details

"resourcePolicies": [ # Resource policies applied to this disk for automatic snapshot creations. Specified using the full or partial URL. For instance template, specify only the resource policy name. "A String", ], - "sourceImage": "A String", # The source image to create this disk. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set. + "sourceImage": "A String", # The source image to create this disk. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set. "sourceImageEncryptionKey": { # The customer-supplied encryption key of the source image. Required if the source image is protected by a customer-supplied encryption key. InstanceTemplate and InstancePropertiesPatch do not store customer-supplied encryption keys, so you cannot create disks for instances in a managed instance group if the source images are encrypted with your own keys. "kmsKeyName": "A String", # The name of the encryption key that is stored in Google Cloud KMS. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key The fully-qualifed key name may be returned for resource GET requests. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key /cryptoKeyVersions/1 "kmsKeyServiceAccount": "A String", # The service account being used for the encryption request for the given KMS key. If absent, the Compute Engine default service account is used. For example: "kmsKeyServiceAccount": "name@project_id.iam.gserviceaccount.com/ @@ -939,7 +939,7 @@

Method Details

"rsaEncryptedKey": "A String", # Specifies an RFC 4648 base64 encoded, RSA-wrapped 2048-bit customer-supplied encryption key to either encrypt or decrypt this resource. You can provide either the rawKey or the rsaEncryptedKey. For example: "rsaEncryptedKey": "ieCx/NcW06PcT7Ep1X6LUTc/hLvUDYyzSZPPVCVPTVEohpeHASqC8uw5TzyO9U+Fka9JFH z0mBibXUInrC/jEk014kCK/NPjYgEMOyssZ4ZINPKxlUh2zn1bV+MCaTICrdmuSBTWlUUiFoD D6PYznLwh8ZNdaheCeZ8ewEXgFQ8V+sDroLaN3Xs3MDTXQEMMoNUXMCZEIpg9Vtp9x2oe==" The key must meet the following requirements before you can provide it to Compute Engine: 1. The key is wrapped using a RSA public key certificate provided by Google. 2. After being wrapped, the key must be encoded in RFC 4648 base64 encoding. Gets the RSA public key certificate provided by Google at: https://cloud-certs.storage.googleapis.com/google-cloud-csek-ingress.pem "sha256": "A String", # [Output only] The RFC 4648 base64 encoded SHA-256 hash of the customer-supplied encryption key that protects this resource. }, - "sourceSnapshot": "A String", # The source snapshot to create this disk. When creating a new instance, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required except for local SSD. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set. + "sourceSnapshot": "A String", # The source snapshot to create this disk. When creating a new instance boot disk, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set. "sourceSnapshotEncryptionKey": { # The customer-supplied encryption key of the source snapshot. "kmsKeyName": "A String", # The name of the encryption key that is stored in Google Cloud KMS. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key The fully-qualifed key name may be returned for resource GET requests. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key /cryptoKeyVersions/1 "kmsKeyServiceAccount": "A String", # The service account being used for the encryption request for the given KMS key. If absent, the Compute Engine default service account is used. For example: "kmsKeyServiceAccount": "name@project_id.iam.gserviceaccount.com/ @@ -980,7 +980,7 @@

Method Details

"fileType": "A String", # The file type of source file. }, }, - "source": "A String", # Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk. + "source": "A String", # Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk. "type": "A String", # Specifies the type of the disk, either SCRATCH or PERSISTENT. If not specified, the default is PERSISTENT. } @@ -1172,7 +1172,7 @@

Method Details

"resourcePolicies": [ # Resource policies applied to this disk for automatic snapshot creations. Specified using the full or partial URL. For instance template, specify only the resource policy name. "A String", ], - "sourceImage": "A String", # The source image to create this disk. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set. + "sourceImage": "A String", # The source image to create this disk. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set. "sourceImageEncryptionKey": { # The customer-supplied encryption key of the source image. Required if the source image is protected by a customer-supplied encryption key. InstanceTemplate and InstancePropertiesPatch do not store customer-supplied encryption keys, so you cannot create disks for instances in a managed instance group if the source images are encrypted with your own keys. "kmsKeyName": "A String", # The name of the encryption key that is stored in Google Cloud KMS. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key The fully-qualifed key name may be returned for resource GET requests. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key /cryptoKeyVersions/1 "kmsKeyServiceAccount": "A String", # The service account being used for the encryption request for the given KMS key. If absent, the Compute Engine default service account is used. For example: "kmsKeyServiceAccount": "name@project_id.iam.gserviceaccount.com/ @@ -1180,7 +1180,7 @@

Method Details

"rsaEncryptedKey": "A String", # Specifies an RFC 4648 base64 encoded, RSA-wrapped 2048-bit customer-supplied encryption key to either encrypt or decrypt this resource. You can provide either the rawKey or the rsaEncryptedKey. For example: "rsaEncryptedKey": "ieCx/NcW06PcT7Ep1X6LUTc/hLvUDYyzSZPPVCVPTVEohpeHASqC8uw5TzyO9U+Fka9JFH z0mBibXUInrC/jEk014kCK/NPjYgEMOyssZ4ZINPKxlUh2zn1bV+MCaTICrdmuSBTWlUUiFoD D6PYznLwh8ZNdaheCeZ8ewEXgFQ8V+sDroLaN3Xs3MDTXQEMMoNUXMCZEIpg9Vtp9x2oe==" The key must meet the following requirements before you can provide it to Compute Engine: 1. The key is wrapped using a RSA public key certificate provided by Google. 2. After being wrapped, the key must be encoded in RFC 4648 base64 encoding. Gets the RSA public key certificate provided by Google at: https://cloud-certs.storage.googleapis.com/google-cloud-csek-ingress.pem "sha256": "A String", # [Output only] The RFC 4648 base64 encoded SHA-256 hash of the customer-supplied encryption key that protects this resource. }, - "sourceSnapshot": "A String", # The source snapshot to create this disk. When creating a new instance, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required except for local SSD. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set. + "sourceSnapshot": "A String", # The source snapshot to create this disk. When creating a new instance boot disk, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set. "sourceSnapshotEncryptionKey": { # The customer-supplied encryption key of the source snapshot. "kmsKeyName": "A String", # The name of the encryption key that is stored in Google Cloud KMS. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key The fully-qualifed key name may be returned for resource GET requests. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key /cryptoKeyVersions/1 "kmsKeyServiceAccount": "A String", # The service account being used for the encryption request for the given KMS key. If absent, the Compute Engine default service account is used. For example: "kmsKeyServiceAccount": "name@project_id.iam.gserviceaccount.com/ @@ -1221,7 +1221,7 @@

Method Details

"fileType": "A String", # The file type of source file. }, }, - "source": "A String", # Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk. + "source": "A String", # Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk. "type": "A String", # Specifies the type of the disk, either SCRATCH or PERSISTENT. If not specified, the default is PERSISTENT. }, ], @@ -1956,7 +1956,7 @@

Method Details

"resourcePolicies": [ # Resource policies applied to this disk for automatic snapshot creations. Specified using the full or partial URL. For instance template, specify only the resource policy name. "A String", ], - "sourceImage": "A String", # The source image to create this disk. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set. + "sourceImage": "A String", # The source image to create this disk. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set. "sourceImageEncryptionKey": { # The customer-supplied encryption key of the source image. Required if the source image is protected by a customer-supplied encryption key. InstanceTemplate and InstancePropertiesPatch do not store customer-supplied encryption keys, so you cannot create disks for instances in a managed instance group if the source images are encrypted with your own keys. "kmsKeyName": "A String", # The name of the encryption key that is stored in Google Cloud KMS. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key The fully-qualifed key name may be returned for resource GET requests. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key /cryptoKeyVersions/1 "kmsKeyServiceAccount": "A String", # The service account being used for the encryption request for the given KMS key. If absent, the Compute Engine default service account is used. For example: "kmsKeyServiceAccount": "name@project_id.iam.gserviceaccount.com/ @@ -1964,7 +1964,7 @@

Method Details

"rsaEncryptedKey": "A String", # Specifies an RFC 4648 base64 encoded, RSA-wrapped 2048-bit customer-supplied encryption key to either encrypt or decrypt this resource. You can provide either the rawKey or the rsaEncryptedKey. For example: "rsaEncryptedKey": "ieCx/NcW06PcT7Ep1X6LUTc/hLvUDYyzSZPPVCVPTVEohpeHASqC8uw5TzyO9U+Fka9JFH z0mBibXUInrC/jEk014kCK/NPjYgEMOyssZ4ZINPKxlUh2zn1bV+MCaTICrdmuSBTWlUUiFoD D6PYznLwh8ZNdaheCeZ8ewEXgFQ8V+sDroLaN3Xs3MDTXQEMMoNUXMCZEIpg9Vtp9x2oe==" The key must meet the following requirements before you can provide it to Compute Engine: 1. The key is wrapped using a RSA public key certificate provided by Google. 2. After being wrapped, the key must be encoded in RFC 4648 base64 encoding. Gets the RSA public key certificate provided by Google at: https://cloud-certs.storage.googleapis.com/google-cloud-csek-ingress.pem "sha256": "A String", # [Output only] The RFC 4648 base64 encoded SHA-256 hash of the customer-supplied encryption key that protects this resource. }, - "sourceSnapshot": "A String", # The source snapshot to create this disk. When creating a new instance, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required except for local SSD. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set. + "sourceSnapshot": "A String", # The source snapshot to create this disk. When creating a new instance boot disk, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set. "sourceSnapshotEncryptionKey": { # The customer-supplied encryption key of the source snapshot. "kmsKeyName": "A String", # The name of the encryption key that is stored in Google Cloud KMS. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key The fully-qualifed key name may be returned for resource GET requests. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key /cryptoKeyVersions/1 "kmsKeyServiceAccount": "A String", # The service account being used for the encryption request for the given KMS key. If absent, the Compute Engine default service account is used. For example: "kmsKeyServiceAccount": "name@project_id.iam.gserviceaccount.com/ @@ -2005,7 +2005,7 @@

Method Details

"fileType": "A String", # The file type of source file. }, }, - "source": "A String", # Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk. + "source": "A String", # Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk. "type": "A String", # Specifies the type of the disk, either SCRATCH or PERSISTENT. If not specified, the default is PERSISTENT. }, ], @@ -2633,7 +2633,7 @@

Method Details

"resourcePolicies": [ # Resource policies applied to this disk for automatic snapshot creations. Specified using the full or partial URL. For instance template, specify only the resource policy name. "A String", ], - "sourceImage": "A String", # The source image to create this disk. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set. + "sourceImage": "A String", # The source image to create this disk. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set. "sourceImageEncryptionKey": { # The customer-supplied encryption key of the source image. Required if the source image is protected by a customer-supplied encryption key. InstanceTemplate and InstancePropertiesPatch do not store customer-supplied encryption keys, so you cannot create disks for instances in a managed instance group if the source images are encrypted with your own keys. "kmsKeyName": "A String", # The name of the encryption key that is stored in Google Cloud KMS. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key The fully-qualifed key name may be returned for resource GET requests. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key /cryptoKeyVersions/1 "kmsKeyServiceAccount": "A String", # The service account being used for the encryption request for the given KMS key. If absent, the Compute Engine default service account is used. For example: "kmsKeyServiceAccount": "name@project_id.iam.gserviceaccount.com/ @@ -2641,7 +2641,7 @@

Method Details

"rsaEncryptedKey": "A String", # Specifies an RFC 4648 base64 encoded, RSA-wrapped 2048-bit customer-supplied encryption key to either encrypt or decrypt this resource. You can provide either the rawKey or the rsaEncryptedKey. For example: "rsaEncryptedKey": "ieCx/NcW06PcT7Ep1X6LUTc/hLvUDYyzSZPPVCVPTVEohpeHASqC8uw5TzyO9U+Fka9JFH z0mBibXUInrC/jEk014kCK/NPjYgEMOyssZ4ZINPKxlUh2zn1bV+MCaTICrdmuSBTWlUUiFoD D6PYznLwh8ZNdaheCeZ8ewEXgFQ8V+sDroLaN3Xs3MDTXQEMMoNUXMCZEIpg9Vtp9x2oe==" The key must meet the following requirements before you can provide it to Compute Engine: 1. The key is wrapped using a RSA public key certificate provided by Google. 2. After being wrapped, the key must be encoded in RFC 4648 base64 encoding. Gets the RSA public key certificate provided by Google at: https://cloud-certs.storage.googleapis.com/google-cloud-csek-ingress.pem "sha256": "A String", # [Output only] The RFC 4648 base64 encoded SHA-256 hash of the customer-supplied encryption key that protects this resource. }, - "sourceSnapshot": "A String", # The source snapshot to create this disk. When creating a new instance, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required except for local SSD. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set. + "sourceSnapshot": "A String", # The source snapshot to create this disk. When creating a new instance boot disk, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set. "sourceSnapshotEncryptionKey": { # The customer-supplied encryption key of the source snapshot. "kmsKeyName": "A String", # The name of the encryption key that is stored in Google Cloud KMS. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key The fully-qualifed key name may be returned for resource GET requests. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key /cryptoKeyVersions/1 "kmsKeyServiceAccount": "A String", # The service account being used for the encryption request for the given KMS key. If absent, the Compute Engine default service account is used. For example: "kmsKeyServiceAccount": "name@project_id.iam.gserviceaccount.com/ @@ -2682,7 +2682,7 @@

Method Details

"fileType": "A String", # The file type of source file. }, }, - "source": "A String", # Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk. + "source": "A String", # Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk. "type": "A String", # Specifies the type of the disk, either SCRATCH or PERSISTENT. If not specified, the default is PERSISTENT. }, ], @@ -3072,7 +3072,7 @@

Method Details

"resourcePolicies": [ # Resource policies applied to this disk for automatic snapshot creations. Specified using the full or partial URL. For instance template, specify only the resource policy name. "A String", ], - "sourceImage": "A String", # The source image to create this disk. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set. + "sourceImage": "A String", # The source image to create this disk. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set. "sourceImageEncryptionKey": { # The customer-supplied encryption key of the source image. Required if the source image is protected by a customer-supplied encryption key. InstanceTemplate and InstancePropertiesPatch do not store customer-supplied encryption keys, so you cannot create disks for instances in a managed instance group if the source images are encrypted with your own keys. "kmsKeyName": "A String", # The name of the encryption key that is stored in Google Cloud KMS. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key The fully-qualifed key name may be returned for resource GET requests. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key /cryptoKeyVersions/1 "kmsKeyServiceAccount": "A String", # The service account being used for the encryption request for the given KMS key. If absent, the Compute Engine default service account is used. For example: "kmsKeyServiceAccount": "name@project_id.iam.gserviceaccount.com/ @@ -3080,7 +3080,7 @@

Method Details

"rsaEncryptedKey": "A String", # Specifies an RFC 4648 base64 encoded, RSA-wrapped 2048-bit customer-supplied encryption key to either encrypt or decrypt this resource. You can provide either the rawKey or the rsaEncryptedKey. For example: "rsaEncryptedKey": "ieCx/NcW06PcT7Ep1X6LUTc/hLvUDYyzSZPPVCVPTVEohpeHASqC8uw5TzyO9U+Fka9JFH z0mBibXUInrC/jEk014kCK/NPjYgEMOyssZ4ZINPKxlUh2zn1bV+MCaTICrdmuSBTWlUUiFoD D6PYznLwh8ZNdaheCeZ8ewEXgFQ8V+sDroLaN3Xs3MDTXQEMMoNUXMCZEIpg9Vtp9x2oe==" The key must meet the following requirements before you can provide it to Compute Engine: 1. The key is wrapped using a RSA public key certificate provided by Google. 2. After being wrapped, the key must be encoded in RFC 4648 base64 encoding. Gets the RSA public key certificate provided by Google at: https://cloud-certs.storage.googleapis.com/google-cloud-csek-ingress.pem "sha256": "A String", # [Output only] The RFC 4648 base64 encoded SHA-256 hash of the customer-supplied encryption key that protects this resource. }, - "sourceSnapshot": "A String", # The source snapshot to create this disk. When creating a new instance, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required except for local SSD. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set. + "sourceSnapshot": "A String", # The source snapshot to create this disk. When creating a new instance boot disk, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set. "sourceSnapshotEncryptionKey": { # The customer-supplied encryption key of the source snapshot. "kmsKeyName": "A String", # The name of the encryption key that is stored in Google Cloud KMS. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key The fully-qualifed key name may be returned for resource GET requests. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key /cryptoKeyVersions/1 "kmsKeyServiceAccount": "A String", # The service account being used for the encryption request for the given KMS key. If absent, the Compute Engine default service account is used. For example: "kmsKeyServiceAccount": "name@project_id.iam.gserviceaccount.com/ @@ -3121,7 +3121,7 @@

Method Details

"fileType": "A String", # The file type of source file. }, }, - "source": "A String", # Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk. + "source": "A String", # Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk. "type": "A String", # Specifies the type of the disk, either SCRATCH or PERSISTENT. If not specified, the default is PERSISTENT. }, ], @@ -6626,7 +6626,7 @@

Method Details

"resourcePolicies": [ # Resource policies applied to this disk for automatic snapshot creations. Specified using the full or partial URL. For instance template, specify only the resource policy name. "A String", ], - "sourceImage": "A String", # The source image to create this disk. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set. + "sourceImage": "A String", # The source image to create this disk. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set. "sourceImageEncryptionKey": { # The customer-supplied encryption key of the source image. Required if the source image is protected by a customer-supplied encryption key. InstanceTemplate and InstancePropertiesPatch do not store customer-supplied encryption keys, so you cannot create disks for instances in a managed instance group if the source images are encrypted with your own keys. "kmsKeyName": "A String", # The name of the encryption key that is stored in Google Cloud KMS. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key The fully-qualifed key name may be returned for resource GET requests. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key /cryptoKeyVersions/1 "kmsKeyServiceAccount": "A String", # The service account being used for the encryption request for the given KMS key. If absent, the Compute Engine default service account is used. For example: "kmsKeyServiceAccount": "name@project_id.iam.gserviceaccount.com/ @@ -6634,7 +6634,7 @@

Method Details

"rsaEncryptedKey": "A String", # Specifies an RFC 4648 base64 encoded, RSA-wrapped 2048-bit customer-supplied encryption key to either encrypt or decrypt this resource. You can provide either the rawKey or the rsaEncryptedKey. For example: "rsaEncryptedKey": "ieCx/NcW06PcT7Ep1X6LUTc/hLvUDYyzSZPPVCVPTVEohpeHASqC8uw5TzyO9U+Fka9JFH z0mBibXUInrC/jEk014kCK/NPjYgEMOyssZ4ZINPKxlUh2zn1bV+MCaTICrdmuSBTWlUUiFoD D6PYznLwh8ZNdaheCeZ8ewEXgFQ8V+sDroLaN3Xs3MDTXQEMMoNUXMCZEIpg9Vtp9x2oe==" The key must meet the following requirements before you can provide it to Compute Engine: 1. The key is wrapped using a RSA public key certificate provided by Google. 2. After being wrapped, the key must be encoded in RFC 4648 base64 encoding. Gets the RSA public key certificate provided by Google at: https://cloud-certs.storage.googleapis.com/google-cloud-csek-ingress.pem "sha256": "A String", # [Output only] The RFC 4648 base64 encoded SHA-256 hash of the customer-supplied encryption key that protects this resource. }, - "sourceSnapshot": "A String", # The source snapshot to create this disk. When creating a new instance, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required except for local SSD. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set. + "sourceSnapshot": "A String", # The source snapshot to create this disk. When creating a new instance boot disk, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set. "sourceSnapshotEncryptionKey": { # The customer-supplied encryption key of the source snapshot. "kmsKeyName": "A String", # The name of the encryption key that is stored in Google Cloud KMS. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key The fully-qualifed key name may be returned for resource GET requests. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key /cryptoKeyVersions/1 "kmsKeyServiceAccount": "A String", # The service account being used for the encryption request for the given KMS key. If absent, the Compute Engine default service account is used. For example: "kmsKeyServiceAccount": "name@project_id.iam.gserviceaccount.com/ @@ -6675,7 +6675,7 @@

Method Details

"fileType": "A String", # The file type of source file. }, }, - "source": "A String", # Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk. + "source": "A String", # Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk. "type": "A String", # Specifies the type of the disk, either SCRATCH or PERSISTENT. If not specified, the default is PERSISTENT. }, ], diff --git a/docs/dyn/compute_v1.machineImages.html b/docs/dyn/compute_v1.machineImages.html index 233d15d3323..bdef0a55730 100644 --- a/docs/dyn/compute_v1.machineImages.html +++ b/docs/dyn/compute_v1.machineImages.html @@ -309,7 +309,7 @@

Method Details

"resourcePolicies": [ # Resource policies applied to this disk for automatic snapshot creations. Specified using the full or partial URL. For instance template, specify only the resource policy name. "A String", ], - "sourceImage": "A String", # The source image to create this disk. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set. + "sourceImage": "A String", # The source image to create this disk. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set. "sourceImageEncryptionKey": { # The customer-supplied encryption key of the source image. Required if the source image is protected by a customer-supplied encryption key. InstanceTemplate and InstancePropertiesPatch do not store customer-supplied encryption keys, so you cannot create disks for instances in a managed instance group if the source images are encrypted with your own keys. "kmsKeyName": "A String", # The name of the encryption key that is stored in Google Cloud KMS. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key The fully-qualifed key name may be returned for resource GET requests. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key /cryptoKeyVersions/1 "kmsKeyServiceAccount": "A String", # The service account being used for the encryption request for the given KMS key. If absent, the Compute Engine default service account is used. For example: "kmsKeyServiceAccount": "name@project_id.iam.gserviceaccount.com/ @@ -317,7 +317,7 @@

Method Details

"rsaEncryptedKey": "A String", # Specifies an RFC 4648 base64 encoded, RSA-wrapped 2048-bit customer-supplied encryption key to either encrypt or decrypt this resource. You can provide either the rawKey or the rsaEncryptedKey. For example: "rsaEncryptedKey": "ieCx/NcW06PcT7Ep1X6LUTc/hLvUDYyzSZPPVCVPTVEohpeHASqC8uw5TzyO9U+Fka9JFH z0mBibXUInrC/jEk014kCK/NPjYgEMOyssZ4ZINPKxlUh2zn1bV+MCaTICrdmuSBTWlUUiFoD D6PYznLwh8ZNdaheCeZ8ewEXgFQ8V+sDroLaN3Xs3MDTXQEMMoNUXMCZEIpg9Vtp9x2oe==" The key must meet the following requirements before you can provide it to Compute Engine: 1. The key is wrapped using a RSA public key certificate provided by Google. 2. After being wrapped, the key must be encoded in RFC 4648 base64 encoding. Gets the RSA public key certificate provided by Google at: https://cloud-certs.storage.googleapis.com/google-cloud-csek-ingress.pem "sha256": "A String", # [Output only] The RFC 4648 base64 encoded SHA-256 hash of the customer-supplied encryption key that protects this resource. }, - "sourceSnapshot": "A String", # The source snapshot to create this disk. When creating a new instance, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required except for local SSD. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set. + "sourceSnapshot": "A String", # The source snapshot to create this disk. When creating a new instance boot disk, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set. "sourceSnapshotEncryptionKey": { # The customer-supplied encryption key of the source snapshot. "kmsKeyName": "A String", # The name of the encryption key that is stored in Google Cloud KMS. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key The fully-qualifed key name may be returned for resource GET requests. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key /cryptoKeyVersions/1 "kmsKeyServiceAccount": "A String", # The service account being used for the encryption request for the given KMS key. If absent, the Compute Engine default service account is used. For example: "kmsKeyServiceAccount": "name@project_id.iam.gserviceaccount.com/ @@ -358,7 +358,7 @@

Method Details

"fileType": "A String", # The file type of source file. }, }, - "source": "A String", # Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk. + "source": "A String", # Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk. "type": "A String", # Specifies the type of the disk, either SCRATCH or PERSISTENT. If not specified, the default is PERSISTENT. }, ], @@ -854,7 +854,7 @@

Method Details

"resourcePolicies": [ # Resource policies applied to this disk for automatic snapshot creations. Specified using the full or partial URL. For instance template, specify only the resource policy name. "A String", ], - "sourceImage": "A String", # The source image to create this disk. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set. + "sourceImage": "A String", # The source image to create this disk. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set. "sourceImageEncryptionKey": { # The customer-supplied encryption key of the source image. Required if the source image is protected by a customer-supplied encryption key. InstanceTemplate and InstancePropertiesPatch do not store customer-supplied encryption keys, so you cannot create disks for instances in a managed instance group if the source images are encrypted with your own keys. "kmsKeyName": "A String", # The name of the encryption key that is stored in Google Cloud KMS. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key The fully-qualifed key name may be returned for resource GET requests. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key /cryptoKeyVersions/1 "kmsKeyServiceAccount": "A String", # The service account being used for the encryption request for the given KMS key. If absent, the Compute Engine default service account is used. For example: "kmsKeyServiceAccount": "name@project_id.iam.gserviceaccount.com/ @@ -862,7 +862,7 @@

Method Details

"rsaEncryptedKey": "A String", # Specifies an RFC 4648 base64 encoded, RSA-wrapped 2048-bit customer-supplied encryption key to either encrypt or decrypt this resource. You can provide either the rawKey or the rsaEncryptedKey. For example: "rsaEncryptedKey": "ieCx/NcW06PcT7Ep1X6LUTc/hLvUDYyzSZPPVCVPTVEohpeHASqC8uw5TzyO9U+Fka9JFH z0mBibXUInrC/jEk014kCK/NPjYgEMOyssZ4ZINPKxlUh2zn1bV+MCaTICrdmuSBTWlUUiFoD D6PYznLwh8ZNdaheCeZ8ewEXgFQ8V+sDroLaN3Xs3MDTXQEMMoNUXMCZEIpg9Vtp9x2oe==" The key must meet the following requirements before you can provide it to Compute Engine: 1. The key is wrapped using a RSA public key certificate provided by Google. 2. After being wrapped, the key must be encoded in RFC 4648 base64 encoding. Gets the RSA public key certificate provided by Google at: https://cloud-certs.storage.googleapis.com/google-cloud-csek-ingress.pem "sha256": "A String", # [Output only] The RFC 4648 base64 encoded SHA-256 hash of the customer-supplied encryption key that protects this resource. }, - "sourceSnapshot": "A String", # The source snapshot to create this disk. When creating a new instance, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required except for local SSD. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set. + "sourceSnapshot": "A String", # The source snapshot to create this disk. When creating a new instance boot disk, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set. "sourceSnapshotEncryptionKey": { # The customer-supplied encryption key of the source snapshot. "kmsKeyName": "A String", # The name of the encryption key that is stored in Google Cloud KMS. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key The fully-qualifed key name may be returned for resource GET requests. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key /cryptoKeyVersions/1 "kmsKeyServiceAccount": "A String", # The service account being used for the encryption request for the given KMS key. If absent, the Compute Engine default service account is used. For example: "kmsKeyServiceAccount": "name@project_id.iam.gserviceaccount.com/ @@ -903,7 +903,7 @@

Method Details

"fileType": "A String", # The file type of source file. }, }, - "source": "A String", # Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk. + "source": "A String", # Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk. "type": "A String", # Specifies the type of the disk, either SCRATCH or PERSISTENT. If not specified, the default is PERSISTENT. }, ], @@ -1429,7 +1429,7 @@

Method Details

"resourcePolicies": [ # Resource policies applied to this disk for automatic snapshot creations. Specified using the full or partial URL. For instance template, specify only the resource policy name. "A String", ], - "sourceImage": "A String", # The source image to create this disk. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set. + "sourceImage": "A String", # The source image to create this disk. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set. "sourceImageEncryptionKey": { # The customer-supplied encryption key of the source image. Required if the source image is protected by a customer-supplied encryption key. InstanceTemplate and InstancePropertiesPatch do not store customer-supplied encryption keys, so you cannot create disks for instances in a managed instance group if the source images are encrypted with your own keys. "kmsKeyName": "A String", # The name of the encryption key that is stored in Google Cloud KMS. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key The fully-qualifed key name may be returned for resource GET requests. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key /cryptoKeyVersions/1 "kmsKeyServiceAccount": "A String", # The service account being used for the encryption request for the given KMS key. If absent, the Compute Engine default service account is used. For example: "kmsKeyServiceAccount": "name@project_id.iam.gserviceaccount.com/ @@ -1437,7 +1437,7 @@

Method Details

"rsaEncryptedKey": "A String", # Specifies an RFC 4648 base64 encoded, RSA-wrapped 2048-bit customer-supplied encryption key to either encrypt or decrypt this resource. You can provide either the rawKey or the rsaEncryptedKey. For example: "rsaEncryptedKey": "ieCx/NcW06PcT7Ep1X6LUTc/hLvUDYyzSZPPVCVPTVEohpeHASqC8uw5TzyO9U+Fka9JFH z0mBibXUInrC/jEk014kCK/NPjYgEMOyssZ4ZINPKxlUh2zn1bV+MCaTICrdmuSBTWlUUiFoD D6PYznLwh8ZNdaheCeZ8ewEXgFQ8V+sDroLaN3Xs3MDTXQEMMoNUXMCZEIpg9Vtp9x2oe==" The key must meet the following requirements before you can provide it to Compute Engine: 1. The key is wrapped using a RSA public key certificate provided by Google. 2. After being wrapped, the key must be encoded in RFC 4648 base64 encoding. Gets the RSA public key certificate provided by Google at: https://cloud-certs.storage.googleapis.com/google-cloud-csek-ingress.pem "sha256": "A String", # [Output only] The RFC 4648 base64 encoded SHA-256 hash of the customer-supplied encryption key that protects this resource. }, - "sourceSnapshot": "A String", # The source snapshot to create this disk. When creating a new instance, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required except for local SSD. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set. + "sourceSnapshot": "A String", # The source snapshot to create this disk. When creating a new instance boot disk, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set. "sourceSnapshotEncryptionKey": { # The customer-supplied encryption key of the source snapshot. "kmsKeyName": "A String", # The name of the encryption key that is stored in Google Cloud KMS. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key The fully-qualifed key name may be returned for resource GET requests. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key /cryptoKeyVersions/1 "kmsKeyServiceAccount": "A String", # The service account being used for the encryption request for the given KMS key. If absent, the Compute Engine default service account is used. For example: "kmsKeyServiceAccount": "name@project_id.iam.gserviceaccount.com/ @@ -1478,7 +1478,7 @@

Method Details

"fileType": "A String", # The file type of source file. }, }, - "source": "A String", # Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk. + "source": "A String", # Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk. "type": "A String", # Specifies the type of the disk, either SCRATCH or PERSISTENT. If not specified, the default is PERSISTENT. }, ], diff --git a/docs/dyn/compute_v1.networks.html b/docs/dyn/compute_v1.networks.html index 5aea0c5d43b..95253b3ede6 100644 --- a/docs/dyn/compute_v1.networks.html +++ b/docs/dyn/compute_v1.networks.html @@ -106,7 +106,7 @@

Instance Methods

Retrieves the next page of results.

patch(project, network, body=None, requestId=None, x__xgafv=None)
-  Patches the specified network with the data included in the request. Only the following fields can be modified: routingConfig.routingMode.
+  Patches the specified network with the data included in the request. Only routingConfig can be modified.

removePeering(project, network, body=None, requestId=None, x__xgafv=None)

Removes a peering from the specified network.

@@ -928,7 +928,7 @@

Method Details

patch(project, network, body=None, requestId=None, x__xgafv=None)
-  Patches the specified network with the data included in the request. Only the following fields can be modified: routingConfig.routingMode.
+  Patches the specified network with the data included in the request. Only routingConfig can be modified.

Args:
  project: string, Project ID for this request. (required)
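A short sketch of the broadened patch surface described above (routingConfig as a whole rather than only routingConfig.routingMode); the project and network names are placeholders:

```python
from googleapiclient import discovery

compute = discovery.build("compute", "v1")
operation = compute.networks().patch(
    project="my-project",   # placeholder
    network="my-network",   # placeholder
    # Per the updated docstring, only routingConfig is modifiable here.
    body={"routingConfig": {"routingMode": "GLOBAL"}},
).execute()
```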
diff --git a/docs/dyn/compute_v1.regionInstanceGroupManagers.html b/docs/dyn/compute_v1.regionInstanceGroupManagers.html
index f6469442f74..d545c66f463 100644
--- a/docs/dyn/compute_v1.regionInstanceGroupManagers.html
+++ b/docs/dyn/compute_v1.regionInstanceGroupManagers.html
@@ -1060,6 +1060,8 @@ 

Method Details

}, ], "region": "A String", # [Output Only] The URL of the region where the managed instance group resides (for regional resources). + "satisfiesPzi": True or False, # [Output Only] Reserved for future use. + "satisfiesPzs": True or False, # [Output Only] Reserved for future use. "selfLink": "A String", # [Output Only] The URL for this managed instance group. The server defines this URL. "statefulPolicy": { # Stateful configuration for this Instanced Group Manager "preservedState": { # Configuration of preserved resources. @@ -1204,6 +1206,8 @@

Method Details

}, ], "region": "A String", # [Output Only] The URL of the region where the managed instance group resides (for regional resources). + "satisfiesPzi": True or False, # [Output Only] Reserved for future use. + "satisfiesPzs": True or False, # [Output Only] Reserved for future use. "selfLink": "A String", # [Output Only] The URL for this managed instance group. The server defines this URL. "statefulPolicy": { # Stateful configuration for this Instanced Group Manager "preservedState": { # Configuration of preserved resources. @@ -1477,6 +1481,8 @@

Method Details

}, ], "region": "A String", # [Output Only] The URL of the region where the managed instance group resides (for regional resources). + "satisfiesPzi": True or False, # [Output Only] Reserved for future use. + "satisfiesPzs": True or False, # [Output Only] Reserved for future use. "selfLink": "A String", # [Output Only] The URL for this managed instance group. The server defines this URL. "statefulPolicy": { # Stateful configuration for this Instanced Group Manager "preservedState": { # Configuration of preserved resources. @@ -1958,6 +1964,8 @@

Method Details

}, ], "region": "A String", # [Output Only] The URL of the region where the managed instance group resides (for regional resources). + "satisfiesPzi": True or False, # [Output Only] Reserved for future use. + "satisfiesPzs": True or False, # [Output Only] Reserved for future use. "selfLink": "A String", # [Output Only] The URL for this managed instance group. The server defines this URL. "statefulPolicy": { # Stateful configuration for this Instanced Group Manager "preservedState": { # Configuration of preserved resources. diff --git a/docs/dyn/compute_v1.regionInstanceTemplates.html b/docs/dyn/compute_v1.regionInstanceTemplates.html index 0ddee561aaa..5506d7bbe84 100644 --- a/docs/dyn/compute_v1.regionInstanceTemplates.html +++ b/docs/dyn/compute_v1.regionInstanceTemplates.html @@ -303,7 +303,7 @@

Method Details

"resourcePolicies": [ # Resource policies applied to this disk for automatic snapshot creations. Specified using the full or partial URL. For instance template, specify only the resource policy name. "A String", ], - "sourceImage": "A String", # The source image to create this disk. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set. + "sourceImage": "A String", # The source image to create this disk. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set. "sourceImageEncryptionKey": { # The customer-supplied encryption key of the source image. Required if the source image is protected by a customer-supplied encryption key. InstanceTemplate and InstancePropertiesPatch do not store customer-supplied encryption keys, so you cannot create disks for instances in a managed instance group if the source images are encrypted with your own keys. "kmsKeyName": "A String", # The name of the encryption key that is stored in Google Cloud KMS. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key The fully-qualifed key name may be returned for resource GET requests. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key /cryptoKeyVersions/1 "kmsKeyServiceAccount": "A String", # The service account being used for the encryption request for the given KMS key. If absent, the Compute Engine default service account is used. For example: "kmsKeyServiceAccount": "name@project_id.iam.gserviceaccount.com/ @@ -311,7 +311,7 @@

Method Details

"rsaEncryptedKey": "A String", # Specifies an RFC 4648 base64 encoded, RSA-wrapped 2048-bit customer-supplied encryption key to either encrypt or decrypt this resource. You can provide either the rawKey or the rsaEncryptedKey. For example: "rsaEncryptedKey": "ieCx/NcW06PcT7Ep1X6LUTc/hLvUDYyzSZPPVCVPTVEohpeHASqC8uw5TzyO9U+Fka9JFH z0mBibXUInrC/jEk014kCK/NPjYgEMOyssZ4ZINPKxlUh2zn1bV+MCaTICrdmuSBTWlUUiFoD D6PYznLwh8ZNdaheCeZ8ewEXgFQ8V+sDroLaN3Xs3MDTXQEMMoNUXMCZEIpg9Vtp9x2oe==" The key must meet the following requirements before you can provide it to Compute Engine: 1. The key is wrapped using a RSA public key certificate provided by Google. 2. After being wrapped, the key must be encoded in RFC 4648 base64 encoding. Gets the RSA public key certificate provided by Google at: https://cloud-certs.storage.googleapis.com/google-cloud-csek-ingress.pem "sha256": "A String", # [Output only] The RFC 4648 base64 encoded SHA-256 hash of the customer-supplied encryption key that protects this resource. }, - "sourceSnapshot": "A String", # The source snapshot to create this disk. When creating a new instance, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required except for local SSD. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set. + "sourceSnapshot": "A String", # The source snapshot to create this disk. When creating a new instance boot disk, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set. "sourceSnapshotEncryptionKey": { # The customer-supplied encryption key of the source snapshot. "kmsKeyName": "A String", # The name of the encryption key that is stored in Google Cloud KMS. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key The fully-qualifed key name may be returned for resource GET requests. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key /cryptoKeyVersions/1 "kmsKeyServiceAccount": "A String", # The service account being used for the encryption request for the given KMS key. If absent, the Compute Engine default service account is used. For example: "kmsKeyServiceAccount": "name@project_id.iam.gserviceaccount.com/ @@ -352,7 +352,7 @@

Method Details

"fileType": "A String", # The file type of source file. }, }, - "source": "A String", # Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk. + "source": "A String", # Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk. "type": "A String", # Specifies the type of the disk, either SCRATCH or PERSISTENT. If not specified, the default is PERSISTENT. }, ], @@ -578,7 +578,7 @@

Method Details

"resourcePolicies": [ # Resource policies applied to this disk for automatic snapshot creations. Specified using the full or partial URL. For instance template, specify only the resource policy name. "A String", ], - "sourceImage": "A String", # The source image to create this disk. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set. + "sourceImage": "A String", # The source image to create this disk. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set. "sourceImageEncryptionKey": { # The customer-supplied encryption key of the source image. Required if the source image is protected by a customer-supplied encryption key. InstanceTemplate and InstancePropertiesPatch do not store customer-supplied encryption keys, so you cannot create disks for instances in a managed instance group if the source images are encrypted with your own keys. "kmsKeyName": "A String", # The name of the encryption key that is stored in Google Cloud KMS. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key The fully-qualifed key name may be returned for resource GET requests. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key /cryptoKeyVersions/1 "kmsKeyServiceAccount": "A String", # The service account being used for the encryption request for the given KMS key. If absent, the Compute Engine default service account is used. For example: "kmsKeyServiceAccount": "name@project_id.iam.gserviceaccount.com/ @@ -586,7 +586,7 @@

Method Details

"rsaEncryptedKey": "A String", # Specifies an RFC 4648 base64 encoded, RSA-wrapped 2048-bit customer-supplied encryption key to either encrypt or decrypt this resource. You can provide either the rawKey or the rsaEncryptedKey. For example: "rsaEncryptedKey": "ieCx/NcW06PcT7Ep1X6LUTc/hLvUDYyzSZPPVCVPTVEohpeHASqC8uw5TzyO9U+Fka9JFH z0mBibXUInrC/jEk014kCK/NPjYgEMOyssZ4ZINPKxlUh2zn1bV+MCaTICrdmuSBTWlUUiFoD D6PYznLwh8ZNdaheCeZ8ewEXgFQ8V+sDroLaN3Xs3MDTXQEMMoNUXMCZEIpg9Vtp9x2oe==" The key must meet the following requirements before you can provide it to Compute Engine: 1. The key is wrapped using a RSA public key certificate provided by Google. 2. After being wrapped, the key must be encoded in RFC 4648 base64 encoding. Gets the RSA public key certificate provided by Google at: https://cloud-certs.storage.googleapis.com/google-cloud-csek-ingress.pem "sha256": "A String", # [Output only] The RFC 4648 base64 encoded SHA-256 hash of the customer-supplied encryption key that protects this resource. }, - "sourceSnapshot": "A String", # The source snapshot to create this disk. When creating a new instance, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required except for local SSD. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set. + "sourceSnapshot": "A String", # The source snapshot to create this disk. When creating a new instance boot disk, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set. "sourceSnapshotEncryptionKey": { # The customer-supplied encryption key of the source snapshot. "kmsKeyName": "A String", # The name of the encryption key that is stored in Google Cloud KMS. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key The fully-qualifed key name may be returned for resource GET requests. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key /cryptoKeyVersions/1 "kmsKeyServiceAccount": "A String", # The service account being used for the encryption request for the given KMS key. If absent, the Compute Engine default service account is used. For example: "kmsKeyServiceAccount": "name@project_id.iam.gserviceaccount.com/ @@ -627,7 +627,7 @@

Method Details

"fileType": "A String", # The file type of source file. }, }, - "source": "A String", # Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk. + "source": "A String", # Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk. "type": "A String", # Specifies the type of the disk, either SCRATCH or PERSISTENT. If not specified, the default is PERSISTENT. }, ], @@ -982,7 +982,7 @@

Method Details

"resourcePolicies": [ # Resource policies applied to this disk for automatic snapshot creations. Specified using the full or partial URL. For instance template, specify only the resource policy name. "A String", ], - "sourceImage": "A String", # The source image to create this disk. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set. + "sourceImage": "A String", # The source image to create this disk. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set. "sourceImageEncryptionKey": { # The customer-supplied encryption key of the source image. Required if the source image is protected by a customer-supplied encryption key. InstanceTemplate and InstancePropertiesPatch do not store customer-supplied encryption keys, so you cannot create disks for instances in a managed instance group if the source images are encrypted with your own keys. "kmsKeyName": "A String", # The name of the encryption key that is stored in Google Cloud KMS. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key The fully-qualifed key name may be returned for resource GET requests. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key /cryptoKeyVersions/1 "kmsKeyServiceAccount": "A String", # The service account being used for the encryption request for the given KMS key. If absent, the Compute Engine default service account is used. For example: "kmsKeyServiceAccount": "name@project_id.iam.gserviceaccount.com/ @@ -990,7 +990,7 @@

Method Details

"rsaEncryptedKey": "A String", # Specifies an RFC 4648 base64 encoded, RSA-wrapped 2048-bit customer-supplied encryption key to either encrypt or decrypt this resource. You can provide either the rawKey or the rsaEncryptedKey. For example: "rsaEncryptedKey": "ieCx/NcW06PcT7Ep1X6LUTc/hLvUDYyzSZPPVCVPTVEohpeHASqC8uw5TzyO9U+Fka9JFH z0mBibXUInrC/jEk014kCK/NPjYgEMOyssZ4ZINPKxlUh2zn1bV+MCaTICrdmuSBTWlUUiFoD D6PYznLwh8ZNdaheCeZ8ewEXgFQ8V+sDroLaN3Xs3MDTXQEMMoNUXMCZEIpg9Vtp9x2oe==" The key must meet the following requirements before you can provide it to Compute Engine: 1. The key is wrapped using a RSA public key certificate provided by Google. 2. After being wrapped, the key must be encoded in RFC 4648 base64 encoding. Gets the RSA public key certificate provided by Google at: https://cloud-certs.storage.googleapis.com/google-cloud-csek-ingress.pem "sha256": "A String", # [Output only] The RFC 4648 base64 encoded SHA-256 hash of the customer-supplied encryption key that protects this resource. }, - "sourceSnapshot": "A String", # The source snapshot to create this disk. When creating a new instance, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required except for local SSD. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set. + "sourceSnapshot": "A String", # The source snapshot to create this disk. When creating a new instance boot disk, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set. "sourceSnapshotEncryptionKey": { # The customer-supplied encryption key of the source snapshot. "kmsKeyName": "A String", # The name of the encryption key that is stored in Google Cloud KMS. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key The fully-qualifed key name may be returned for resource GET requests. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key /cryptoKeyVersions/1 "kmsKeyServiceAccount": "A String", # The service account being used for the encryption request for the given KMS key. If absent, the Compute Engine default service account is used. For example: "kmsKeyServiceAccount": "name@project_id.iam.gserviceaccount.com/ @@ -1031,7 +1031,7 @@

Method Details

"fileType": "A String", # The file type of source file. }, }, - "source": "A String", # Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk. + "source": "A String", # Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk. "type": "A String", # Specifies the type of the disk, either SCRATCH or PERSISTENT. If not specified, the default is PERSISTENT. }, ], diff --git a/docs/dyn/compute_v1.regionInstances.html b/docs/dyn/compute_v1.regionInstances.html index 7822dba3372..5295fda869e 100644 --- a/docs/dyn/compute_v1.regionInstances.html +++ b/docs/dyn/compute_v1.regionInstances.html @@ -151,7 +151,7 @@

Method Details

"resourcePolicies": [ # Resource policies applied to this disk for automatic snapshot creations. Specified using the full or partial URL. For instance template, specify only the resource policy name. "A String", ], - "sourceImage": "A String", # The source image to create this disk. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set. + "sourceImage": "A String", # The source image to create this disk. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set. "sourceImageEncryptionKey": { # The customer-supplied encryption key of the source image. Required if the source image is protected by a customer-supplied encryption key. InstanceTemplate and InstancePropertiesPatch do not store customer-supplied encryption keys, so you cannot create disks for instances in a managed instance group if the source images are encrypted with your own keys. "kmsKeyName": "A String", # The name of the encryption key that is stored in Google Cloud KMS. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key The fully-qualifed key name may be returned for resource GET requests. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key /cryptoKeyVersions/1 "kmsKeyServiceAccount": "A String", # The service account being used for the encryption request for the given KMS key. If absent, the Compute Engine default service account is used. For example: "kmsKeyServiceAccount": "name@project_id.iam.gserviceaccount.com/ @@ -159,7 +159,7 @@

Method Details

"rsaEncryptedKey": "A String", # Specifies an RFC 4648 base64 encoded, RSA-wrapped 2048-bit customer-supplied encryption key to either encrypt or decrypt this resource. You can provide either the rawKey or the rsaEncryptedKey. For example: "rsaEncryptedKey": "ieCx/NcW06PcT7Ep1X6LUTc/hLvUDYyzSZPPVCVPTVEohpeHASqC8uw5TzyO9U+Fka9JFH z0mBibXUInrC/jEk014kCK/NPjYgEMOyssZ4ZINPKxlUh2zn1bV+MCaTICrdmuSBTWlUUiFoD D6PYznLwh8ZNdaheCeZ8ewEXgFQ8V+sDroLaN3Xs3MDTXQEMMoNUXMCZEIpg9Vtp9x2oe==" The key must meet the following requirements before you can provide it to Compute Engine: 1. The key is wrapped using a RSA public key certificate provided by Google. 2. After being wrapped, the key must be encoded in RFC 4648 base64 encoding. Gets the RSA public key certificate provided by Google at: https://cloud-certs.storage.googleapis.com/google-cloud-csek-ingress.pem "sha256": "A String", # [Output only] The RFC 4648 base64 encoded SHA-256 hash of the customer-supplied encryption key that protects this resource. }, - "sourceSnapshot": "A String", # The source snapshot to create this disk. When creating a new instance, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required except for local SSD. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set. + "sourceSnapshot": "A String", # The source snapshot to create this disk. When creating a new instance boot disk, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set. "sourceSnapshotEncryptionKey": { # The customer-supplied encryption key of the source snapshot. "kmsKeyName": "A String", # The name of the encryption key that is stored in Google Cloud KMS. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key The fully-qualifed key name may be returned for resource GET requests. For example: "kmsKeyName": "projects/kms_project_id/locations/region/keyRings/ key_region/cryptoKeys/key /cryptoKeyVersions/1 "kmsKeyServiceAccount": "A String", # The service account being used for the encryption request for the given KMS key. If absent, the Compute Engine default service account is used. For example: "kmsKeyServiceAccount": "name@project_id.iam.gserviceaccount.com/ @@ -200,7 +200,7 @@

Method Details

"fileType": "A String", # The file type of source file. }, }, - "source": "A String", # Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk. + "source": "A String", # Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk. "type": "A String", # Specifies the type of the disk, either SCRATCH or PERSISTENT. If not specified, the default is PERSISTENT. }, ], diff --git a/docs/dyn/compute_v1.regionTargetHttpsProxies.html b/docs/dyn/compute_v1.regionTargetHttpsProxies.html index dd3d5a9fda2..1a2e2e0504c 100644 --- a/docs/dyn/compute_v1.regionTargetHttpsProxies.html +++ b/docs/dyn/compute_v1.regionTargetHttpsProxies.html @@ -267,6 +267,7 @@

Method Details

"A String", ], "sslPolicy": "A String", # URL of SslPolicy resource that will be associated with the TargetHttpsProxy resource. If not set, the TargetHttpsProxy resource has no SSL policy configured. + "tlsEarlyData": "A String", # Specifies whether TLS 1.3 0-RTT Data ("Early Data") should be accepted for this service. Early Data allows a TLS resumption handshake to include the initial application payload (a HTTP request) alongside the handshake, reducing the effective round trips to "zero". This applies to TLS 1.3 connections over TCP (HTTP/2) as well as over UDP (QUIC/h3). This can improve application performance, especially on networks where interruptions may be common, such as on mobile. Requests with Early Data will have the "Early-Data" HTTP header set on the request, with a value of "1", to allow the backend to determine whether Early Data was included. Note: TLS Early Data may allow requests to be replayed, as the data is sent to the backend before the handshake has fully completed. Applications that allow idempotent HTTP methods to make non-idempotent changes, such as a GET request updating a database, should not accept Early Data on those requests, and reject requests with the "Early-Data: 1" HTTP header by returning a HTTP 425 (Too Early) status code, in order to remain RFC compliant. The default value is DISABLED. "urlMap": "A String", # A fully-qualified or valid partial URL to the UrlMap resource that defines the mapping from URL to the BackendService. For example, the following are all valid URLs for specifying a URL map: - https://www.googleapis.compute/v1/projects/project/global/urlMaps/ url-map - projects/project/global/urlMaps/url-map - global/urlMaps/url-map }
@@ -300,6 +301,7 @@

Method Details

"A String", ], "sslPolicy": "A String", # URL of SslPolicy resource that will be associated with the TargetHttpsProxy resource. If not set, the TargetHttpsProxy resource has no SSL policy configured. + "tlsEarlyData": "A String", # Specifies whether TLS 1.3 0-RTT Data ("Early Data") should be accepted for this service. Early Data allows a TLS resumption handshake to include the initial application payload (a HTTP request) alongside the handshake, reducing the effective round trips to "zero". This applies to TLS 1.3 connections over TCP (HTTP/2) as well as over UDP (QUIC/h3). This can improve application performance, especially on networks where interruptions may be common, such as on mobile. Requests with Early Data will have the "Early-Data" HTTP header set on the request, with a value of "1", to allow the backend to determine whether Early Data was included. Note: TLS Early Data may allow requests to be replayed, as the data is sent to the backend before the handshake has fully completed. Applications that allow idempotent HTTP methods to make non-idempotent changes, such as a GET request updating a database, should not accept Early Data on those requests, and reject requests with the "Early-Data: 1" HTTP header by returning a HTTP 425 (Too Early) status code, in order to remain RFC compliant. The default value is DISABLED. "urlMap": "A String", # A fully-qualified or valid partial URL to the UrlMap resource that defines the mapping from URL to the BackendService. For example, the following are all valid URLs for specifying a URL map: - https://www.googleapis.compute/v1/projects/project/global/urlMaps/ url-map - projects/project/global/urlMaps/url-map - global/urlMaps/url-map } @@ -462,6 +464,7 @@

Method Details

"A String", ], "sslPolicy": "A String", # URL of SslPolicy resource that will be associated with the TargetHttpsProxy resource. If not set, the TargetHttpsProxy resource has no SSL policy configured. + "tlsEarlyData": "A String", # Specifies whether TLS 1.3 0-RTT Data ("Early Data") should be accepted for this service. Early Data allows a TLS resumption handshake to include the initial application payload (a HTTP request) alongside the handshake, reducing the effective round trips to "zero". This applies to TLS 1.3 connections over TCP (HTTP/2) as well as over UDP (QUIC/h3). This can improve application performance, especially on networks where interruptions may be common, such as on mobile. Requests with Early Data will have the "Early-Data" HTTP header set on the request, with a value of "1", to allow the backend to determine whether Early Data was included. Note: TLS Early Data may allow requests to be replayed, as the data is sent to the backend before the handshake has fully completed. Applications that allow idempotent HTTP methods to make non-idempotent changes, such as a GET request updating a database, should not accept Early Data on those requests, and reject requests with the "Early-Data: 1" HTTP header by returning a HTTP 425 (Too Early) status code, in order to remain RFC compliant. The default value is DISABLED. "urlMap": "A String", # A fully-qualified or valid partial URL to the UrlMap resource that defines the mapping from URL to the BackendService. For example, the following are all valid URLs for specifying a URL map: - https://www.googleapis.compute/v1/projects/project/global/urlMaps/ url-map - projects/project/global/urlMaps/url-map - global/urlMaps/url-map }, ], @@ -525,6 +528,7 @@

Method Details

"A String", ], "sslPolicy": "A String", # URL of SslPolicy resource that will be associated with the TargetHttpsProxy resource. If not set, the TargetHttpsProxy resource has no SSL policy configured. + "tlsEarlyData": "A String", # Specifies whether TLS 1.3 0-RTT Data ("Early Data") should be accepted for this service. Early Data allows a TLS resumption handshake to include the initial application payload (a HTTP request) alongside the handshake, reducing the effective round trips to "zero". This applies to TLS 1.3 connections over TCP (HTTP/2) as well as over UDP (QUIC/h3). This can improve application performance, especially on networks where interruptions may be common, such as on mobile. Requests with Early Data will have the "Early-Data" HTTP header set on the request, with a value of "1", to allow the backend to determine whether Early Data was included. Note: TLS Early Data may allow requests to be replayed, as the data is sent to the backend before the handshake has fully completed. Applications that allow idempotent HTTP methods to make non-idempotent changes, such as a GET request updating a database, should not accept Early Data on those requests, and reject requests with the "Early-Data: 1" HTTP header by returning a HTTP 425 (Too Early) status code, in order to remain RFC compliant. The default value is DISABLED. "urlMap": "A String", # A fully-qualified or valid partial URL to the UrlMap resource that defines the mapping from URL to the BackendService. For example, the following are all valid URLs for specifying a URL map: - https://www.googleapis.compute/v1/projects/project/global/urlMaps/ url-map - projects/project/global/urlMaps/url-map - global/urlMaps/url-map } diff --git a/docs/dyn/compute_v1.regionZones.html b/docs/dyn/compute_v1.regionZones.html index b3301391c9a..1885d93598d 100644 --- a/docs/dyn/compute_v1.regionZones.html +++ b/docs/dyn/compute_v1.regionZones.html @@ -112,7 +112,7 @@

Method Details

{ # Contains a list of zone resources. "id": "A String", # [Output Only] Unique identifier for the resource; defined by the server. "items": [ # A list of Zone resources. - { # Represents a Zone resource. A zone is a deployment area. These deployment areas are subsets of a region. For example the zone us-east1-a is located in the us-east1 region. For more information, read Regions and Zones. + { # Represents a Zone resource. A zone is a deployment area. These deployment areas are subsets of a region. For example the zone us-east1-b is located in the us-east1 region. For more information, read Regions and Zones. "availableCpuPlatforms": [ # [Output Only] Available cpu/platform selections for the zone. "A String", ], diff --git a/docs/dyn/compute_v1.targetHttpsProxies.html b/docs/dyn/compute_v1.targetHttpsProxies.html index 03233c7f5d8..43bf45df923 100644 --- a/docs/dyn/compute_v1.targetHttpsProxies.html +++ b/docs/dyn/compute_v1.targetHttpsProxies.html @@ -162,6 +162,7 @@

Method Details

"A String", ], "sslPolicy": "A String", # URL of SslPolicy resource that will be associated with the TargetHttpsProxy resource. If not set, the TargetHttpsProxy resource has no SSL policy configured. + "tlsEarlyData": "A String", # Specifies whether TLS 1.3 0-RTT Data ("Early Data") should be accepted for this service. Early Data allows a TLS resumption handshake to include the initial application payload (a HTTP request) alongside the handshake, reducing the effective round trips to "zero". This applies to TLS 1.3 connections over TCP (HTTP/2) as well as over UDP (QUIC/h3). This can improve application performance, especially on networks where interruptions may be common, such as on mobile. Requests with Early Data will have the "Early-Data" HTTP header set on the request, with a value of "1", to allow the backend to determine whether Early Data was included. Note: TLS Early Data may allow requests to be replayed, as the data is sent to the backend before the handshake has fully completed. Applications that allow idempotent HTTP methods to make non-idempotent changes, such as a GET request updating a database, should not accept Early Data on those requests, and reject requests with the "Early-Data: 1" HTTP header by returning a HTTP 425 (Too Early) status code, in order to remain RFC compliant. The default value is DISABLED. "urlMap": "A String", # A fully-qualified or valid partial URL to the UrlMap resource that defines the mapping from URL to the BackendService. For example, the following are all valid URLs for specifying a URL map: - https://www.googleapis.compute/v1/projects/project/global/urlMaps/ url-map - projects/project/global/urlMaps/url-map - global/urlMaps/url-map }, ], @@ -373,6 +374,7 @@

Method Details

"A String", ], "sslPolicy": "A String", # URL of SslPolicy resource that will be associated with the TargetHttpsProxy resource. If not set, the TargetHttpsProxy resource has no SSL policy configured. + "tlsEarlyData": "A String", # Specifies whether TLS 1.3 0-RTT Data ("Early Data") should be accepted for this service. Early Data allows a TLS resumption handshake to include the initial application payload (a HTTP request) alongside the handshake, reducing the effective round trips to "zero". This applies to TLS 1.3 connections over TCP (HTTP/2) as well as over UDP (QUIC/h3). This can improve application performance, especially on networks where interruptions may be common, such as on mobile. Requests with Early Data will have the "Early-Data" HTTP header set on the request, with a value of "1", to allow the backend to determine whether Early Data was included. Note: TLS Early Data may allow requests to be replayed, as the data is sent to the backend before the handshake has fully completed. Applications that allow idempotent HTTP methods to make non-idempotent changes, such as a GET request updating a database, should not accept Early Data on those requests, and reject requests with the "Early-Data: 1" HTTP header by returning a HTTP 425 (Too Early) status code, in order to remain RFC compliant. The default value is DISABLED. "urlMap": "A String", # A fully-qualified or valid partial URL to the UrlMap resource that defines the mapping from URL to the BackendService. For example, the following are all valid URLs for specifying a URL map: - https://www.googleapis.compute/v1/projects/project/global/urlMaps/ url-map - projects/project/global/urlMaps/url-map - global/urlMaps/url-map }
@@ -405,6 +407,7 @@

Method Details

"A String", ], "sslPolicy": "A String", # URL of SslPolicy resource that will be associated with the TargetHttpsProxy resource. If not set, the TargetHttpsProxy resource has no SSL policy configured. + "tlsEarlyData": "A String", # Specifies whether TLS 1.3 0-RTT Data ("Early Data") should be accepted for this service. Early Data allows a TLS resumption handshake to include the initial application payload (a HTTP request) alongside the handshake, reducing the effective round trips to "zero". This applies to TLS 1.3 connections over TCP (HTTP/2) as well as over UDP (QUIC/h3). This can improve application performance, especially on networks where interruptions may be common, such as on mobile. Requests with Early Data will have the "Early-Data" HTTP header set on the request, with a value of "1", to allow the backend to determine whether Early Data was included. Note: TLS Early Data may allow requests to be replayed, as the data is sent to the backend before the handshake has fully completed. Applications that allow idempotent HTTP methods to make non-idempotent changes, such as a GET request updating a database, should not accept Early Data on those requests, and reject requests with the "Early-Data: 1" HTTP header by returning a HTTP 425 (Too Early) status code, in order to remain RFC compliant. The default value is DISABLED. "urlMap": "A String", # A fully-qualified or valid partial URL to the UrlMap resource that defines the mapping from URL to the BackendService. For example, the following are all valid URLs for specifying a URL map: - https://www.googleapis.compute/v1/projects/project/global/urlMaps/ url-map - projects/project/global/urlMaps/url-map - global/urlMaps/url-map } @@ -566,6 +569,7 @@

Method Details

"A String", ], "sslPolicy": "A String", # URL of SslPolicy resource that will be associated with the TargetHttpsProxy resource. If not set, the TargetHttpsProxy resource has no SSL policy configured. + "tlsEarlyData": "A String", # Specifies whether TLS 1.3 0-RTT Data ("Early Data") should be accepted for this service. Early Data allows a TLS resumption handshake to include the initial application payload (a HTTP request) alongside the handshake, reducing the effective round trips to "zero". This applies to TLS 1.3 connections over TCP (HTTP/2) as well as over UDP (QUIC/h3). This can improve application performance, especially on networks where interruptions may be common, such as on mobile. Requests with Early Data will have the "Early-Data" HTTP header set on the request, with a value of "1", to allow the backend to determine whether Early Data was included. Note: TLS Early Data may allow requests to be replayed, as the data is sent to the backend before the handshake has fully completed. Applications that allow idempotent HTTP methods to make non-idempotent changes, such as a GET request updating a database, should not accept Early Data on those requests, and reject requests with the "Early-Data: 1" HTTP header by returning a HTTP 425 (Too Early) status code, in order to remain RFC compliant. The default value is DISABLED. "urlMap": "A String", # A fully-qualified or valid partial URL to the UrlMap resource that defines the mapping from URL to the BackendService. For example, the following are all valid URLs for specifying a URL map: - https://www.googleapis.compute/v1/projects/project/global/urlMaps/ url-map - projects/project/global/urlMaps/url-map - global/urlMaps/url-map }, ], @@ -628,6 +632,7 @@

Method Details

"A String", ], "sslPolicy": "A String", # URL of SslPolicy resource that will be associated with the TargetHttpsProxy resource. If not set, the TargetHttpsProxy resource has no SSL policy configured. + "tlsEarlyData": "A String", # Specifies whether TLS 1.3 0-RTT Data ("Early Data") should be accepted for this service. Early Data allows a TLS resumption handshake to include the initial application payload (a HTTP request) alongside the handshake, reducing the effective round trips to "zero". This applies to TLS 1.3 connections over TCP (HTTP/2) as well as over UDP (QUIC/h3). This can improve application performance, especially on networks where interruptions may be common, such as on mobile. Requests with Early Data will have the "Early-Data" HTTP header set on the request, with a value of "1", to allow the backend to determine whether Early Data was included. Note: TLS Early Data may allow requests to be replayed, as the data is sent to the backend before the handshake has fully completed. Applications that allow idempotent HTTP methods to make non-idempotent changes, such as a GET request updating a database, should not accept Early Data on those requests, and reject requests with the "Early-Data: 1" HTTP header by returning a HTTP 425 (Too Early) status code, in order to remain RFC compliant. The default value is DISABLED. "urlMap": "A String", # A fully-qualified or valid partial URL to the UrlMap resource that defines the mapping from URL to the BackendService. For example, the following are all valid URLs for specifying a URL map: - https://www.googleapis.compute/v1/projects/project/global/urlMaps/ url-map - projects/project/global/urlMaps/url-map - global/urlMaps/url-map } diff --git a/docs/dyn/compute_v1.zones.html b/docs/dyn/compute_v1.zones.html index b5cf6d51b5f..6deb1de2066 100644 --- a/docs/dyn/compute_v1.zones.html +++ b/docs/dyn/compute_v1.zones.html @@ -107,7 +107,7 @@

Method Details

Returns: An object of the form: - { # Represents a Zone resource. A zone is a deployment area. These deployment areas are subsets of a region. For example the zone us-east1-a is located in the us-east1 region. For more information, read Regions and Zones. + { # Represents a Zone resource. A zone is a deployment area. These deployment areas are subsets of a region. For example the zone us-east1-b is located in the us-east1 region. For more information, read Regions and Zones. "availableCpuPlatforms": [ # [Output Only] Available cpu/platform selections for the zone. "A String", ], @@ -152,7 +152,7 @@

Method Details

{ # Contains a list of zone resources. "id": "A String", # [Output Only] Unique identifier for the resource; defined by the server. "items": [ # A list of Zone resources. - { # Represents a Zone resource. A zone is a deployment area. These deployment areas are subsets of a region. For example the zone us-east1-a is located in the us-east1 region. For more information, read Regions and Zones. + { # Represents a Zone resource. A zone is a deployment area. These deployment areas are subsets of a region. For example the zone us-east1-b is located in the us-east1 region. For more information, read Regions and Zones. "availableCpuPlatforms": [ # [Output Only] Available cpu/platform selections for the zone. "A String", ], diff --git a/docs/dyn/connectors_v1.projects.locations.providers.connectors.html b/docs/dyn/connectors_v1.projects.locations.providers.connectors.html index 1c148299209..32a4f2a12d4 100644 --- a/docs/dyn/connectors_v1.projects.locations.providers.connectors.html +++ b/docs/dyn/connectors_v1.projects.locations.providers.connectors.html @@ -112,6 +112,7 @@

Method Details

An object of the form: { # Connectors indicates a specific connector type, e.g. Salesforce, SAP, etc. + "category": "A String", # Output only. Category of the connector. "createTime": "A String", # Output only. Created time. "description": "A String", # Output only. Description of the resource. "displayName": "A String", # Output only. Display name. @@ -134,6 +135,9 @@

Method Details

}, "launchStage": "A String", # Output only. Flag to mark the version indicating the launch stage. "name": "A String", # Output only. Resource name of the Connector. Format: projects/{project}/locations/{location}/providers/{provider}/connectors/{connector} Only global location is supported for Connector resource. + "tags": [ # Output only. Tags of the connector. + "A String", + ], "updateTime": "A String", # Output only. Updated time. "webAssetsLocation": "A String", # Output only. Cloud storage location of icons etc consumed by UI. }
@@ -159,6 +163,7 @@

Method Details

{ # Response message for Connectors.ListConnectors. "connectors": [ # A list of connectors. { # Connectors indicates a specific connector type, e.g. Salesforce, SAP, etc. + "category": "A String", # Output only. Category of the connector. "createTime": "A String", # Output only. Created time. "description": "A String", # Output only. Description of the resource. "displayName": "A String", # Output only. Display name. @@ -181,6 +186,9 @@

Method Details

}, "launchStage": "A String", # Output only. Flag to mark the version indicating the launch stage. "name": "A String", # Output only. Resource name of the Connector. Format: projects/{project}/locations/{location}/providers/{provider}/connectors/{connector} Only global location is supported for Connector resource. + "tags": [ # Output only. Tags of the connector. + "A String", + ], "updateTime": "A String", # Output only. Updated time. "webAssetsLocation": "A String", # Output only. Cloud storage location of icons etc consumed by UI. }, diff --git a/docs/dyn/connectors_v2.projects.locations.connections.entityTypes.entitieswithacls.html b/docs/dyn/connectors_v2.projects.locations.connections.entityTypes.entitieswithacls.html new file mode 100644 index 00000000000..a6cbef26b24 --- /dev/null +++ b/docs/dyn/connectors_v2.projects.locations.connections.entityTypes.entitieswithacls.html @@ -0,0 +1,148 @@ + + + +

Connectors API . projects . locations . connections . entityTypes . entitieswithacls

+

Instance Methods

+

+ close()

+

Close httplib2 connections.

+

+ list(parent, conditions=None, gsutilUri=None, pageSize=None, pageToken=None, sortBy=None, x__xgafv=None)

+

Lists entity rows with ACLs of a particular entity type contained in the request. Note: 1. Currently, at most one 'sort_by' column is supported. 2. If no 'sort_by' column is provided, the primary key of the table is used. If zero or more than one primary key is available, we default to the unpaginated list entities logic which only returns the first page. 3. The values of the 'sort_by' columns must uniquely identify an entity row, otherwise undefined behaviors may be observed during pagination. 4. Since transactions are not supported, any updates, inserts or deletes during pagination can lead to stale data being returned or other unexpected behaviors.

+

+ list_next()

+

Retrieves the next page of results.

+

Method Details

+
+ close() +
Close httplib2 connections.
+
+ +
+ list(parent, conditions=None, gsutilUri=None, pageSize=None, pageToken=None, sortBy=None, x__xgafv=None) +
Lists entity rows with ACLs of a particular entity type contained in the request. Note: 1. Currently, at most one 'sort_by' column is supported. 2. If no 'sort_by' column is provided, the primary key of the table is used. If zero or more than one primary key is available, we default to the unpaginated list entities logic which only returns the first page. 3. The values of the 'sort_by' columns must uniquely identify an entity row, otherwise undefined behaviors may be observed during pagination. 4. Since transactions are not supported, any updates, inserts or deletes during pagination can lead to stale data being returned or other unexpected behaviors.
+
+Args:
+  parent: string, Required. Resource name of the Entity Type. Format: projects/{project}/locations/{location}/connections/{connection}/entityTypes/{type} (required)
+  conditions: string, Conditions to be used when listing entities. From a proto standpoint, there are no restrictions on what can be passed using this field. The connector documentation should have information about what format of filters/conditions is supported.
+  gsutilUri: string, Format: gs://object_path
+  pageSize: integer, Number of entity rows to return. Default page size = 25. Max page size = 200.
+  pageToken: string, Page token value if available from a previous request.
+  sortBy: string, List of 'sort_by' columns to use when returning the results. (repeated)
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Response message for EntityService.ListEntitiesWithACLs
+  "entitiesWithAcl": [ # List containing entity rows.
+    { # EntityWithACL refers to a single row of an entity type with ACL information.
+      "acl_info": { # AclInfo has a list of readers for a resource, as defined in https://cloud.google.com/generative-ai-app-builder/docs/reference/rest/v1alpha/projects.locations.collections.dataStores.branches.documents#aclinfo # ACL information of the entity.
+        "readers": [ # A list of readers for a resource.
+          { # Readers is a list of principals that have read access to a resource.
+            "principals": [ # A list of principals that have read access to a resource.
+              { # Principal is a user or group that has access to a resource.
+                "group_id": "A String", # The group that has access to a resource.
+                "user_id": "A String", # The user that has access to a resource.
+              },
+            ],
+          },
+        ],
+      },
+      "id": "A String",
+      "jsonData": "A String", # Entity data in JSON format.
+    },
+  ],
+  "nextPageToken": "A String", # Next page token if more records are available.
+}
+
+ +
+ list_next() +
Retrieves the next page of results.
+
+        Args:
+          previous_request: The request for the previous page. (required)
+          previous_response: The response from the request for the previous page. (required)
+
+        Returns:
+          A request object that you can call 'execute()' on to request the next
+          page. Returns None if there are no more items in the collection.
+        
+
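list_next follows the standard google-api-python-client pagination idiom; a sketch continuing with the same placeholder names as above:

    from googleapiclient.discovery import build

    svc = build("connectors", "v2")
    parent = ("projects/my-project/locations/us-central1/"
              "connections/my-connection/entityTypes/Account")  # placeholders
    request = (svc.projects().locations().connections().entityTypes()
               .entitieswithacls().list(parent=parent, pageSize=200))
    while request is not None:
        response = request.execute()
        for row in response.get("entitiesWithAcl", []):
            print(row.get("id"))
        # Returns None once no nextPageToken remains.
        request = (svc.projects().locations().connections().entityTypes()
                   .entitieswithacls().list_next(request, response))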
+ + \ No newline at end of file diff --git a/docs/dyn/connectors_v2.projects.locations.connections.entityTypes.html b/docs/dyn/connectors_v2.projects.locations.connections.entityTypes.html index 85b93f52879..b54bea8be30 100644 --- a/docs/dyn/connectors_v2.projects.locations.connections.entityTypes.html +++ b/docs/dyn/connectors_v2.projects.locations.connections.entityTypes.html @@ -79,6 +79,11 @@

Instance Methods

Returns the entities Resource.

+

+ entitieswithacls() +

+

Returns the entitieswithacls Resource.

+

close()

Close httplib2 connections.

diff --git a/docs/dyn/contactcenteraiplatform_v1alpha1.projects.locations.contactCenters.html b/docs/dyn/contactcenteraiplatform_v1alpha1.projects.locations.contactCenters.html index 42629bd445b..9e9da011b60 100644 --- a/docs/dyn/contactcenteraiplatform_v1alpha1.projects.locations.contactCenters.html +++ b/docs/dyn/contactcenteraiplatform_v1alpha1.projects.locations.contactCenters.html @@ -159,6 +159,9 @@

Method Details

"name": "A String", # Name of the component. "serviceAttachments": [ # Associated service attachments. { # Container for the VPC-SC networking configurations. + "allowedProjectIds": [ # The list of project ids that are allowed to send traffic to the service attachment. This field should be filled only for the ingress service attachments. + "A String", + ], "name": "A String", # The service attachment name that will be used for sending private traffic to the CCAIP tenant project. Example: "projects/${TENANT_PROJECT_ID}/regions/${REGION}/serviceAttachments/ingress-default". }, ], @@ -169,6 +172,9 @@

Method Details

"name": "A String", # Name of the component. "serviceAttachments": [ # Associated service attachments. { # Container for the VPC-SC networking configurations. + "allowedProjectIds": [ # The list of project ids that are allowed to send traffic to the service attachment. This field should be filled only for the ingress service attachments. + "A String", + ], "name": "A String", # The service attachment name that will be used for sending private traffic to the CCAIP tenant project. Example: "projects/${TENANT_PROJECT_ID}/regions/${REGION}/serviceAttachments/ingress-default". }, ], @@ -329,6 +335,9 @@

Method Details

"name": "A String", # Name of the component. "serviceAttachments": [ # Associated service attachments. { # Container for the VPC-SC networking configurations. + "allowedProjectIds": [ # The list of project ids that are allowed to send traffic to the service attachment. This field should be filled only for the ingress service attachments. + "A String", + ], "name": "A String", # The service attachment name that will be used for sending private traffic to the CCAIP tenant project. Example: "projects/${TENANT_PROJECT_ID}/regions/${REGION}/serviceAttachments/ingress-default". }, ], @@ -339,6 +348,9 @@

Method Details

"name": "A String", # Name of the component. "serviceAttachments": [ # Associated service attachments. { # Container for the VPC-SC networking configurations. + "allowedProjectIds": [ # The list of project ids that are allowed to send traffic to the service attachment. This field should be filled only for the ingress service attachments. + "A String", + ], "name": "A String", # The service attachment name that will be used for sending private traffic to the CCAIP tenant project. Example: "projects/${TENANT_PROJECT_ID}/regions/${REGION}/serviceAttachments/ingress-default". }, ], @@ -439,6 +451,9 @@

Method Details

"name": "A String", # Name of the component. "serviceAttachments": [ # Associated service attachments. { # Container for the VPC-SC networking configurations. + "allowedProjectIds": [ # The list of project ids that are allowed to send traffic to the service attachment. This field should be filled only for the ingress service attachments. + "A String", + ], "name": "A String", # The service attachment name that will be used for sending private traffic to the CCAIP tenant project. Example: "projects/${TENANT_PROJECT_ID}/regions/${REGION}/serviceAttachments/ingress-default". }, ], @@ -449,6 +464,9 @@

Method Details

"name": "A String", # Name of the component. "serviceAttachments": [ # Associated service attachments. { # Container for the VPC-SC networking configurations. + "allowedProjectIds": [ # The list of project ids that are allowed to send traffic to the service attachment. This field should be filled only for the ingress service attachments. + "A String", + ], "name": "A String", # The service attachment name that will be used for sending private traffic to the CCAIP tenant project. Example: "projects/${TENANT_PROJECT_ID}/regions/${REGION}/serviceAttachments/ingress-default". }, ], @@ -558,6 +576,9 @@

Method Details

"name": "A String", # Name of the component. "serviceAttachments": [ # Associated service attachments. { # Container for the VPC-SC networking configurations. + "allowedProjectIds": [ # The list of project ids that are allowed to send traffic to the service attachment. This field should be filled only for the ingress service attachments. + "A String", + ], "name": "A String", # The service attachment name that will be used for sending private traffic to the CCAIP tenant project. Example: "projects/${TENANT_PROJECT_ID}/regions/${REGION}/serviceAttachments/ingress-default". }, ], @@ -568,6 +589,9 @@

Method Details

"name": "A String", # Name of the component. "serviceAttachments": [ # Associated service attachments. { # Container for the VPC-SC networking configurations. + "allowedProjectIds": [ # The list of project ids that are allowed to send traffic to the service attachment. This field should be filled only for the ingress service attachments. + "A String", + ], "name": "A String", # The service attachment name that will be used for sending private traffic to the CCAIP tenant project. Example: "projects/${TENANT_PROJECT_ID}/regions/${REGION}/serviceAttachments/ingress-default". }, ], diff --git a/docs/dyn/contactcenterinsights_v1.projects.locations.conversations.html b/docs/dyn/contactcenterinsights_v1.projects.locations.conversations.html index ccb1bb0b28d..a4bb4cd89e0 100644 --- a/docs/dyn/contactcenterinsights_v1.projects.locations.conversations.html +++ b/docs/dyn/contactcenterinsights_v1.projects.locations.conversations.html @@ -515,98 +515,6 @@

Method Details

"question": "A String", # The corresponding FAQ question. "source": "A String", # The knowledge document that this answer was extracted from. Format: projects/{project}/knowledgeBases/{knowledge_base}/documents/{document}. }, - "generatorSuggestionResult": { # Represents response from generators. # The generator suggestion result. - "generatorSuggestion": { # Suggestion generated using a Generator. # The suggestion generated from the Generator. - "agentCoachingSuggestion": { # Suggestion for coaching agents. # Optional. Suggestion to coach the agent. - "agentActionSuggestions": [ # Optional. Suggested actions for the agent to take. - { # Actions suggested for the agent. This is based on applicable instructions. - "agentAction": "A String", # Optional. The suggested action for the agent. - }, - ], - "applicableInstructions": [ # Optional. Instructions applicable based on the current context. - { # Agent Coaching instructions that customer can configure. - "agentAction": "A String", # Optional. The action that human agent should take. For example, "apologize for the slow shipping". If the users only want to use agent coaching for intent detection, agent_action can be empty - "condition": "A String", # Optional. The condition of the instruction. For example, "the customer wants to cancel an order". If the users want the instruction to be triggered unconditionally, the condition can be empty. - "description": "A String", # Optional. The detailed description of this instruction. - "displayName": "A String", # Optional. Display name for the instruction. - "metadata": { # Optional. Additional information attached to this instruction. - "a_key": "A String", - }, - "systemAction": "A String", # Optional. The action that system should take. For example, "call GetOrderTime with order_number={order number provided by the customer}". If the users don't have plugins or don't want to trigger plugins, the system_action can be empty - }, - ], - "sampleResponses": [ # Optional. Sample response for the Agent. - { # Sample response that the agent can use. This could be based on applicable instructions and ingested data from other systems. - "responseText": "A String", # Optional. Sample response for Agent in text. - }, - ], - "suggestionEval": { # Self evaluations of the suggestion. # Self evaluation of the suggestion. - "actionActionSuggestionEval": "A String", # Optional. Eval for Agent action suggestion. - "sampleResponseEval": "A String", # Optional. Eval for sample response. - }, - "suggestionReasoning": { # Reasoning for the suggestion. # Reasoning for the suggestion. - "agentActionTaken": "A String", # Optional. The actions that the agent has taken already. - "issueSummary": "A String", # Optional. Summary of the issue. - }, - }, - "freeFormSuggestion": { # Suggestion generated using free form generator. # Optional. Free form suggestion. - "labels": [ # Optional. Labels for the generator. - "A String", - ], - "response": "A String", # Required. Free form suggestion. - }, - "summarySuggestion": { # Suggested summary of the conversation. # Optional. Suggested summary. - "summarySections": [ # Required. All the parts of generated summary. - { # A component of the generated summary. - "section": "A String", # Required. Name of the section. - "summary": "A String", # Required. Summary text for the section. - }, - ], - }, - }, - }, - "knowledgeAssistResult": { # Response for Knowledge Assist. Contains suggested query and optionally includes an answer for the query. # The Knowledge Assist result. 
- "suggestedQuery": { # Represents a suggested query. # The query suggested based on the context. Suggestion is made only if it is different from the previous suggestion. - "queryText": "A String", # Suggested query text. - "score": 3.14, # Suggested query score. - }, - "suggestedQueryAnswer": { # Represents an answer from Knowledge. Cuurently supports FAQ and Generative answers. # The answer generated for the suggested query. Whether or not an answer is generated depends on how confident we are about the generated query. - "answerText": "A String", # The piece of text from the `source` that answers this suggested query. - "faqSource": { # Details about source of FAQ answer. # Populated if the prediction came from FAQ. - "document": "A String", # Indicates which Knowledge Document this answer was extracted from. Format: `projects//knowledgeBases//documents/`. - "question": "A String", # The corresponding FAQ question. - }, - "generativeSource": { # Details about source of Generative answer. # Populated if the prediction was Generative. - "snippets": [ # All snippets used for this Generative Prediction, with their source URI and data. - { # Snippet Source for a Generative Prediction. - "document": "A String", # Indicates which Knowledge Document this snippet was extracted from. Format: `projects//knowledgeBases//documents/`. - "text": "A String", # text taken from that URI. - "title": "A String", # Title of the document. - "uri": "A String", # URI the data is sourced from. - }, - ], - }, - "intentMatchingSource": { # Details about source of Intent Matching answer. # Populated if the prediction was from intent matching. - "title": "A String", # Title of the document. - "uri": "A String", # URI the data is sourced from. - }, - "matchConfidence": 3.14, # The system's confidence score that this answer is a good match for this conversational query. The range is from 0.0 (completely uncertain) to 1.0 (completely certain). - }, - }, - "knowledgeSearchResult": { # Represents a SearchKnowledge answer. # The Knowledge Search result. - "answer": "A String", # The piece of text from the knowledge base documents that answers the search query - "answerRecord": "A String", # The name of the answer record. Format: `projects//locations//answer Records/` - "answerSources": [ # All sources used to generate the answer. - { # The sources of the answers. - "document": "A String", # The document from which the snippet was extracted. Format: `projects//knowledgeBases//documents/` - "snippet": "A String", # The relevant snippet of the article. - "title": "A String", # The title of the article. - "uri": "A String", # The URI of the article. - }, - ], - "answerType": "A String", # The type of the answer. - "confidenceScore": 3.14, # The confidence score in [0.0, 1.0] range. - }, "smartComposeSuggestion": { # Agent Assist Smart Compose suggestion data. # Agent Assist Smart Compose suggestion data. "confidenceScore": 3.14, # The system's confidence score that this suggestion is a good match for this conversation, ranging from 0.0 (completely uncertain) to 1.0 (completely certain). "metadata": { # Map that contains metadata about the Smart Compose suggestion and the document from which it originates. @@ -627,6 +535,10 @@

Method Details

"transcriptIndex": 42, # The index in the sequence of transcribed pieces of the conversation where the boundary is located. This index starts at zero. "wordIndex": 42, # The word index of this boundary with respect to the first word in the transcript piece. This index starts at zero. }, + "userInput": { # Explicit input used for generating the answer # Explicit input used for generating the answer + "generatorName": "A String", # The resource name of associated generator. Format: `projects//locations//generators/` + "query": "A String", # Query text. Article Search uses this to store the input query used to generate the search results. + }, }, ], "startTime": "A String", # The time at which the conversation started. @@ -906,98 +818,6 @@

Method Details

"question": "A String", # The corresponding FAQ question. "source": "A String", # The knowledge document that this answer was extracted from. Format: projects/{project}/knowledgeBases/{knowledge_base}/documents/{document}. }, - "generatorSuggestionResult": { # Represents response from generators. # The generator suggestion result. - "generatorSuggestion": { # Suggestion generated using a Generator. # The suggestion generated from the Generator. - "agentCoachingSuggestion": { # Suggestion for coaching agents. # Optional. Suggestion to coach the agent. - "agentActionSuggestions": [ # Optional. Suggested actions for the agent to take. - { # Actions suggested for the agent. This is based on applicable instructions. - "agentAction": "A String", # Optional. The suggested action for the agent. - }, - ], - "applicableInstructions": [ # Optional. Instructions applicable based on the current context. - { # Agent Coaching instructions that customer can configure. - "agentAction": "A String", # Optional. The action that human agent should take. For example, "apologize for the slow shipping". If the users only want to use agent coaching for intent detection, agent_action can be empty - "condition": "A String", # Optional. The condition of the instruction. For example, "the customer wants to cancel an order". If the users want the instruction to be triggered unconditionally, the condition can be empty. - "description": "A String", # Optional. The detailed description of this instruction. - "displayName": "A String", # Optional. Display name for the instruction. - "metadata": { # Optional. Additional information attached to this instruction. - "a_key": "A String", - }, - "systemAction": "A String", # Optional. The action that system should take. For example, "call GetOrderTime with order_number={order number provided by the customer}". If the users don't have plugins or don't want to trigger plugins, the system_action can be empty - }, - ], - "sampleResponses": [ # Optional. Sample response for the Agent. - { # Sample response that the agent can use. This could be based on applicable instructions and ingested data from other systems. - "responseText": "A String", # Optional. Sample response for Agent in text. - }, - ], - "suggestionEval": { # Self evaluations of the suggestion. # Self evaluation of the suggestion. - "actionActionSuggestionEval": "A String", # Optional. Eval for Agent action suggestion. - "sampleResponseEval": "A String", # Optional. Eval for sample response. - }, - "suggestionReasoning": { # Reasoning for the suggestion. # Reasoning for the suggestion. - "agentActionTaken": "A String", # Optional. The actions that the agent has taken already. - "issueSummary": "A String", # Optional. Summary of the issue. - }, - }, - "freeFormSuggestion": { # Suggestion generated using free form generator. # Optional. Free form suggestion. - "labels": [ # Optional. Labels for the generator. - "A String", - ], - "response": "A String", # Required. Free form suggestion. - }, - "summarySuggestion": { # Suggested summary of the conversation. # Optional. Suggested summary. - "summarySections": [ # Required. All the parts of generated summary. - { # A component of the generated summary. - "section": "A String", # Required. Name of the section. - "summary": "A String", # Required. Summary text for the section. - }, - ], - }, - }, - }, - "knowledgeAssistResult": { # Response for Knowledge Assist. Contains suggested query and optionally includes an answer for the query. # The Knowledge Assist result. 
- "suggestedQuery": { # Represents a suggested query. # The query suggested based on the context. Suggestion is made only if it is different from the previous suggestion. - "queryText": "A String", # Suggested query text. - "score": 3.14, # Suggested query score. - }, - "suggestedQueryAnswer": { # Represents an answer from Knowledge. Cuurently supports FAQ and Generative answers. # The answer generated for the suggested query. Whether or not an answer is generated depends on how confident we are about the generated query. - "answerText": "A String", # The piece of text from the `source` that answers this suggested query. - "faqSource": { # Details about source of FAQ answer. # Populated if the prediction came from FAQ. - "document": "A String", # Indicates which Knowledge Document this answer was extracted from. Format: `projects//knowledgeBases//documents/`. - "question": "A String", # The corresponding FAQ question. - }, - "generativeSource": { # Details about source of Generative answer. # Populated if the prediction was Generative. - "snippets": [ # All snippets used for this Generative Prediction, with their source URI and data. - { # Snippet Source for a Generative Prediction. - "document": "A String", # Indicates which Knowledge Document this snippet was extracted from. Format: `projects//knowledgeBases//documents/`. - "text": "A String", # text taken from that URI. - "title": "A String", # Title of the document. - "uri": "A String", # URI the data is sourced from. - }, - ], - }, - "intentMatchingSource": { # Details about source of Intent Matching answer. # Populated if the prediction was from intent matching. - "title": "A String", # Title of the document. - "uri": "A String", # URI the data is sourced from. - }, - "matchConfidence": 3.14, # The system's confidence score that this answer is a good match for this conversational query. The range is from 0.0 (completely uncertain) to 1.0 (completely certain). - }, - }, - "knowledgeSearchResult": { # Represents a SearchKnowledge answer. # The Knowledge Search result. - "answer": "A String", # The piece of text from the knowledge base documents that answers the search query - "answerRecord": "A String", # The name of the answer record. Format: `projects//locations//answer Records/` - "answerSources": [ # All sources used to generate the answer. - { # The sources of the answers. - "document": "A String", # The document from which the snippet was extracted. Format: `projects//knowledgeBases//documents/` - "snippet": "A String", # The relevant snippet of the article. - "title": "A String", # The title of the article. - "uri": "A String", # The URI of the article. - }, - ], - "answerType": "A String", # The type of the answer. - "confidenceScore": 3.14, # The confidence score in [0.0, 1.0] range. - }, "smartComposeSuggestion": { # Agent Assist Smart Compose suggestion data. # Agent Assist Smart Compose suggestion data. "confidenceScore": 3.14, # The system's confidence score that this suggestion is a good match for this conversation, ranging from 0.0 (completely uncertain) to 1.0 (completely certain). "metadata": { # Map that contains metadata about the Smart Compose suggestion and the document from which it originates. @@ -1018,6 +838,10 @@

Method Details

"transcriptIndex": 42, # The index in the sequence of transcribed pieces of the conversation where the boundary is located. This index starts at zero. "wordIndex": 42, # The word index of this boundary with respect to the first word in the transcript piece. This index starts at zero. }, + "userInput": { # Explicit input used for generating the answer # Explicit input used for generating the answer + "generatorName": "A String", # The resource name of associated generator. Format: `projects//locations//generators/` + "query": "A String", # Query text. Article Search uses this to store the input query used to generate the search results. + }, }, ], "startTime": "A String", # The time at which the conversation started. @@ -1327,98 +1151,6 @@

Method Details

"question": "A String", # The corresponding FAQ question. "source": "A String", # The knowledge document that this answer was extracted from. Format: projects/{project}/knowledgeBases/{knowledge_base}/documents/{document}. }, - "generatorSuggestionResult": { # Represents response from generators. # The generator suggestion result. - "generatorSuggestion": { # Suggestion generated using a Generator. # The suggestion generated from the Generator. - "agentCoachingSuggestion": { # Suggestion for coaching agents. # Optional. Suggestion to coach the agent. - "agentActionSuggestions": [ # Optional. Suggested actions for the agent to take. - { # Actions suggested for the agent. This is based on applicable instructions. - "agentAction": "A String", # Optional. The suggested action for the agent. - }, - ], - "applicableInstructions": [ # Optional. Instructions applicable based on the current context. - { # Agent Coaching instructions that customer can configure. - "agentAction": "A String", # Optional. The action that human agent should take. For example, "apologize for the slow shipping". If the users only want to use agent coaching for intent detection, agent_action can be empty - "condition": "A String", # Optional. The condition of the instruction. For example, "the customer wants to cancel an order". If the users want the instruction to be triggered unconditionally, the condition can be empty. - "description": "A String", # Optional. The detailed description of this instruction. - "displayName": "A String", # Optional. Display name for the instruction. - "metadata": { # Optional. Additional information attached to this instruction. - "a_key": "A String", - }, - "systemAction": "A String", # Optional. The action that system should take. For example, "call GetOrderTime with order_number={order number provided by the customer}". If the users don't have plugins or don't want to trigger plugins, the system_action can be empty - }, - ], - "sampleResponses": [ # Optional. Sample response for the Agent. - { # Sample response that the agent can use. This could be based on applicable instructions and ingested data from other systems. - "responseText": "A String", # Optional. Sample response for Agent in text. - }, - ], - "suggestionEval": { # Self evaluations of the suggestion. # Self evaluation of the suggestion. - "actionActionSuggestionEval": "A String", # Optional. Eval for Agent action suggestion. - "sampleResponseEval": "A String", # Optional. Eval for sample response. - }, - "suggestionReasoning": { # Reasoning for the suggestion. # Reasoning for the suggestion. - "agentActionTaken": "A String", # Optional. The actions that the agent has taken already. - "issueSummary": "A String", # Optional. Summary of the issue. - }, - }, - "freeFormSuggestion": { # Suggestion generated using free form generator. # Optional. Free form suggestion. - "labels": [ # Optional. Labels for the generator. - "A String", - ], - "response": "A String", # Required. Free form suggestion. - }, - "summarySuggestion": { # Suggested summary of the conversation. # Optional. Suggested summary. - "summarySections": [ # Required. All the parts of generated summary. - { # A component of the generated summary. - "section": "A String", # Required. Name of the section. - "summary": "A String", # Required. Summary text for the section. - }, - ], - }, - }, - }, - "knowledgeAssistResult": { # Response for Knowledge Assist. Contains suggested query and optionally includes an answer for the query. # The Knowledge Assist result. 
- "suggestedQuery": { # Represents a suggested query. # The query suggested based on the context. Suggestion is made only if it is different from the previous suggestion. - "queryText": "A String", # Suggested query text. - "score": 3.14, # Suggested query score. - }, - "suggestedQueryAnswer": { # Represents an answer from Knowledge. Cuurently supports FAQ and Generative answers. # The answer generated for the suggested query. Whether or not an answer is generated depends on how confident we are about the generated query. - "answerText": "A String", # The piece of text from the `source` that answers this suggested query. - "faqSource": { # Details about source of FAQ answer. # Populated if the prediction came from FAQ. - "document": "A String", # Indicates which Knowledge Document this answer was extracted from. Format: `projects//knowledgeBases//documents/`. - "question": "A String", # The corresponding FAQ question. - }, - "generativeSource": { # Details about source of Generative answer. # Populated if the prediction was Generative. - "snippets": [ # All snippets used for this Generative Prediction, with their source URI and data. - { # Snippet Source for a Generative Prediction. - "document": "A String", # Indicates which Knowledge Document this snippet was extracted from. Format: `projects//knowledgeBases//documents/`. - "text": "A String", # text taken from that URI. - "title": "A String", # Title of the document. - "uri": "A String", # URI the data is sourced from. - }, - ], - }, - "intentMatchingSource": { # Details about source of Intent Matching answer. # Populated if the prediction was from intent matching. - "title": "A String", # Title of the document. - "uri": "A String", # URI the data is sourced from. - }, - "matchConfidence": 3.14, # The system's confidence score that this answer is a good match for this conversational query. The range is from 0.0 (completely uncertain) to 1.0 (completely certain). - }, - }, - "knowledgeSearchResult": { # Represents a SearchKnowledge answer. # The Knowledge Search result. - "answer": "A String", # The piece of text from the knowledge base documents that answers the search query - "answerRecord": "A String", # The name of the answer record. Format: `projects//locations//answer Records/` - "answerSources": [ # All sources used to generate the answer. - { # The sources of the answers. - "document": "A String", # The document from which the snippet was extracted. Format: `projects//knowledgeBases//documents/` - "snippet": "A String", # The relevant snippet of the article. - "title": "A String", # The title of the article. - "uri": "A String", # The URI of the article. - }, - ], - "answerType": "A String", # The type of the answer. - "confidenceScore": 3.14, # The confidence score in [0.0, 1.0] range. - }, "smartComposeSuggestion": { # Agent Assist Smart Compose suggestion data. # Agent Assist Smart Compose suggestion data. "confidenceScore": 3.14, # The system's confidence score that this suggestion is a good match for this conversation, ranging from 0.0 (completely uncertain) to 1.0 (completely certain). "metadata": { # Map that contains metadata about the Smart Compose suggestion and the document from which it originates. @@ -1439,6 +1171,10 @@

Method Details

"transcriptIndex": 42, # The index in the sequence of transcribed pieces of the conversation where the boundary is located. This index starts at zero. "wordIndex": 42, # The word index of this boundary with respect to the first word in the transcript piece. This index starts at zero. }, + "userInput": { # Explicit input used for generating the answer # Explicit input used for generating the answer + "generatorName": "A String", # The resource name of associated generator. Format: `projects//locations//generators/` + "query": "A String", # Query text. Article Search uses this to store the input query used to generate the search results. + }, }, ], "startTime": "A String", # The time at which the conversation started. @@ -1801,98 +1537,6 @@

Method Details

"question": "A String", # The corresponding FAQ question. "source": "A String", # The knowledge document that this answer was extracted from. Format: projects/{project}/knowledgeBases/{knowledge_base}/documents/{document}. }, - "generatorSuggestionResult": { # Represents response from generators. # The generator suggestion result. - "generatorSuggestion": { # Suggestion generated using a Generator. # The suggestion generated from the Generator. - "agentCoachingSuggestion": { # Suggestion for coaching agents. # Optional. Suggestion to coach the agent. - "agentActionSuggestions": [ # Optional. Suggested actions for the agent to take. - { # Actions suggested for the agent. This is based on applicable instructions. - "agentAction": "A String", # Optional. The suggested action for the agent. - }, - ], - "applicableInstructions": [ # Optional. Instructions applicable based on the current context. - { # Agent Coaching instructions that customer can configure. - "agentAction": "A String", # Optional. The action that human agent should take. For example, "apologize for the slow shipping". If the users only want to use agent coaching for intent detection, agent_action can be empty - "condition": "A String", # Optional. The condition of the instruction. For example, "the customer wants to cancel an order". If the users want the instruction to be triggered unconditionally, the condition can be empty. - "description": "A String", # Optional. The detailed description of this instruction. - "displayName": "A String", # Optional. Display name for the instruction. - "metadata": { # Optional. Additional information attached to this instruction. - "a_key": "A String", - }, - "systemAction": "A String", # Optional. The action that system should take. For example, "call GetOrderTime with order_number={order number provided by the customer}". If the users don't have plugins or don't want to trigger plugins, the system_action can be empty - }, - ], - "sampleResponses": [ # Optional. Sample response for the Agent. - { # Sample response that the agent can use. This could be based on applicable instructions and ingested data from other systems. - "responseText": "A String", # Optional. Sample response for Agent in text. - }, - ], - "suggestionEval": { # Self evaluations of the suggestion. # Self evaluation of the suggestion. - "actionActionSuggestionEval": "A String", # Optional. Eval for Agent action suggestion. - "sampleResponseEval": "A String", # Optional. Eval for sample response. - }, - "suggestionReasoning": { # Reasoning for the suggestion. # Reasoning for the suggestion. - "agentActionTaken": "A String", # Optional. The actions that the agent has taken already. - "issueSummary": "A String", # Optional. Summary of the issue. - }, - }, - "freeFormSuggestion": { # Suggestion generated using free form generator. # Optional. Free form suggestion. - "labels": [ # Optional. Labels for the generator. - "A String", - ], - "response": "A String", # Required. Free form suggestion. - }, - "summarySuggestion": { # Suggested summary of the conversation. # Optional. Suggested summary. - "summarySections": [ # Required. All the parts of generated summary. - { # A component of the generated summary. - "section": "A String", # Required. Name of the section. - "summary": "A String", # Required. Summary text for the section. - }, - ], - }, - }, - }, - "knowledgeAssistResult": { # Response for Knowledge Assist. Contains suggested query and optionally includes an answer for the query. # The Knowledge Assist result. 
- "suggestedQuery": { # Represents a suggested query. # The query suggested based on the context. Suggestion is made only if it is different from the previous suggestion. - "queryText": "A String", # Suggested query text. - "score": 3.14, # Suggested query score. - }, - "suggestedQueryAnswer": { # Represents an answer from Knowledge. Cuurently supports FAQ and Generative answers. # The answer generated for the suggested query. Whether or not an answer is generated depends on how confident we are about the generated query. - "answerText": "A String", # The piece of text from the `source` that answers this suggested query. - "faqSource": { # Details about source of FAQ answer. # Populated if the prediction came from FAQ. - "document": "A String", # Indicates which Knowledge Document this answer was extracted from. Format: `projects//knowledgeBases//documents/`. - "question": "A String", # The corresponding FAQ question. - }, - "generativeSource": { # Details about source of Generative answer. # Populated if the prediction was Generative. - "snippets": [ # All snippets used for this Generative Prediction, with their source URI and data. - { # Snippet Source for a Generative Prediction. - "document": "A String", # Indicates which Knowledge Document this snippet was extracted from. Format: `projects//knowledgeBases//documents/`. - "text": "A String", # text taken from that URI. - "title": "A String", # Title of the document. - "uri": "A String", # URI the data is sourced from. - }, - ], - }, - "intentMatchingSource": { # Details about source of Intent Matching answer. # Populated if the prediction was from intent matching. - "title": "A String", # Title of the document. - "uri": "A String", # URI the data is sourced from. - }, - "matchConfidence": 3.14, # The system's confidence score that this answer is a good match for this conversational query. The range is from 0.0 (completely uncertain) to 1.0 (completely certain). - }, - }, - "knowledgeSearchResult": { # Represents a SearchKnowledge answer. # The Knowledge Search result. - "answer": "A String", # The piece of text from the knowledge base documents that answers the search query - "answerRecord": "A String", # The name of the answer record. Format: `projects//locations//answer Records/` - "answerSources": [ # All sources used to generate the answer. - { # The sources of the answers. - "document": "A String", # The document from which the snippet was extracted. Format: `projects//knowledgeBases//documents/` - "snippet": "A String", # The relevant snippet of the article. - "title": "A String", # The title of the article. - "uri": "A String", # The URI of the article. - }, - ], - "answerType": "A String", # The type of the answer. - "confidenceScore": 3.14, # The confidence score in [0.0, 1.0] range. - }, "smartComposeSuggestion": { # Agent Assist Smart Compose suggestion data. # Agent Assist Smart Compose suggestion data. "confidenceScore": 3.14, # The system's confidence score that this suggestion is a good match for this conversation, ranging from 0.0 (completely uncertain) to 1.0 (completely certain). "metadata": { # Map that contains metadata about the Smart Compose suggestion and the document from which it originates. @@ -1913,6 +1557,10 @@

Method Details

"transcriptIndex": 42, # The index in the sequence of transcribed pieces of the conversation where the boundary is located. This index starts at zero. "wordIndex": 42, # The word index of this boundary with respect to the first word in the transcript piece. This index starts at zero. }, + "userInput": { # Explicit input used for generating the answer # Explicit input used for generating the answer + "generatorName": "A String", # The resource name of associated generator. Format: `projects//locations//generators/` + "query": "A String", # Query text. Article Search uses this to store the input query used to generate the search results. + }, }, ], "startTime": "A String", # The time at which the conversation started. @@ -2210,98 +1858,6 @@

Method Details

"question": "A String", # The corresponding FAQ question. "source": "A String", # The knowledge document that this answer was extracted from. Format: projects/{project}/knowledgeBases/{knowledge_base}/documents/{document}. }, - "generatorSuggestionResult": { # Represents response from generators. # The generator suggestion result. - "generatorSuggestion": { # Suggestion generated using a Generator. # The suggestion generated from the Generator. - "agentCoachingSuggestion": { # Suggestion for coaching agents. # Optional. Suggestion to coach the agent. - "agentActionSuggestions": [ # Optional. Suggested actions for the agent to take. - { # Actions suggested for the agent. This is based on applicable instructions. - "agentAction": "A String", # Optional. The suggested action for the agent. - }, - ], - "applicableInstructions": [ # Optional. Instructions applicable based on the current context. - { # Agent Coaching instructions that customer can configure. - "agentAction": "A String", # Optional. The action that human agent should take. For example, "apologize for the slow shipping". If the users only want to use agent coaching for intent detection, agent_action can be empty - "condition": "A String", # Optional. The condition of the instruction. For example, "the customer wants to cancel an order". If the users want the instruction to be triggered unconditionally, the condition can be empty. - "description": "A String", # Optional. The detailed description of this instruction. - "displayName": "A String", # Optional. Display name for the instruction. - "metadata": { # Optional. Additional information attached to this instruction. - "a_key": "A String", - }, - "systemAction": "A String", # Optional. The action that system should take. For example, "call GetOrderTime with order_number={order number provided by the customer}". If the users don't have plugins or don't want to trigger plugins, the system_action can be empty - }, - ], - "sampleResponses": [ # Optional. Sample response for the Agent. - { # Sample response that the agent can use. This could be based on applicable instructions and ingested data from other systems. - "responseText": "A String", # Optional. Sample response for Agent in text. - }, - ], - "suggestionEval": { # Self evaluations of the suggestion. # Self evaluation of the suggestion. - "actionActionSuggestionEval": "A String", # Optional. Eval for Agent action suggestion. - "sampleResponseEval": "A String", # Optional. Eval for sample response. - }, - "suggestionReasoning": { # Reasoning for the suggestion. # Reasoning for the suggestion. - "agentActionTaken": "A String", # Optional. The actions that the agent has taken already. - "issueSummary": "A String", # Optional. Summary of the issue. - }, - }, - "freeFormSuggestion": { # Suggestion generated using free form generator. # Optional. Free form suggestion. - "labels": [ # Optional. Labels for the generator. - "A String", - ], - "response": "A String", # Required. Free form suggestion. - }, - "summarySuggestion": { # Suggested summary of the conversation. # Optional. Suggested summary. - "summarySections": [ # Required. All the parts of generated summary. - { # A component of the generated summary. - "section": "A String", # Required. Name of the section. - "summary": "A String", # Required. Summary text for the section. - }, - ], - }, - }, - }, - "knowledgeAssistResult": { # Response for Knowledge Assist. Contains suggested query and optionally includes an answer for the query. # The Knowledge Assist result. 
- "suggestedQuery": { # Represents a suggested query. # The query suggested based on the context. Suggestion is made only if it is different from the previous suggestion. - "queryText": "A String", # Suggested query text. - "score": 3.14, # Suggested query score. - }, - "suggestedQueryAnswer": { # Represents an answer from Knowledge. Cuurently supports FAQ and Generative answers. # The answer generated for the suggested query. Whether or not an answer is generated depends on how confident we are about the generated query. - "answerText": "A String", # The piece of text from the `source` that answers this suggested query. - "faqSource": { # Details about source of FAQ answer. # Populated if the prediction came from FAQ. - "document": "A String", # Indicates which Knowledge Document this answer was extracted from. Format: `projects//knowledgeBases//documents/`. - "question": "A String", # The corresponding FAQ question. - }, - "generativeSource": { # Details about source of Generative answer. # Populated if the prediction was Generative. - "snippets": [ # All snippets used for this Generative Prediction, with their source URI and data. - { # Snippet Source for a Generative Prediction. - "document": "A String", # Indicates which Knowledge Document this snippet was extracted from. Format: `projects//knowledgeBases//documents/`. - "text": "A String", # text taken from that URI. - "title": "A String", # Title of the document. - "uri": "A String", # URI the data is sourced from. - }, - ], - }, - "intentMatchingSource": { # Details about source of Intent Matching answer. # Populated if the prediction was from intent matching. - "title": "A String", # Title of the document. - "uri": "A String", # URI the data is sourced from. - }, - "matchConfidence": 3.14, # The system's confidence score that this answer is a good match for this conversational query. The range is from 0.0 (completely uncertain) to 1.0 (completely certain). - }, - }, - "knowledgeSearchResult": { # Represents a SearchKnowledge answer. # The Knowledge Search result. - "answer": "A String", # The piece of text from the knowledge base documents that answers the search query - "answerRecord": "A String", # The name of the answer record. Format: `projects//locations//answer Records/` - "answerSources": [ # All sources used to generate the answer. - { # The sources of the answers. - "document": "A String", # The document from which the snippet was extracted. Format: `projects//knowledgeBases//documents/` - "snippet": "A String", # The relevant snippet of the article. - "title": "A String", # The title of the article. - "uri": "A String", # The URI of the article. - }, - ], - "answerType": "A String", # The type of the answer. - "confidenceScore": 3.14, # The confidence score in [0.0, 1.0] range. - }, "smartComposeSuggestion": { # Agent Assist Smart Compose suggestion data. # Agent Assist Smart Compose suggestion data. "confidenceScore": 3.14, # The system's confidence score that this suggestion is a good match for this conversation, ranging from 0.0 (completely uncertain) to 1.0 (completely certain). "metadata": { # Map that contains metadata about the Smart Compose suggestion and the document from which it originates. @@ -2322,6 +1878,10 @@

Method Details

"transcriptIndex": 42, # The index in the sequence of transcribed pieces of the conversation where the boundary is located. This index starts at zero. "wordIndex": 42, # The word index of this boundary with respect to the first word in the transcript piece. This index starts at zero. }, + "userInput": { # Explicit input used for generating the answer # Explicit input used for generating the answer + "generatorName": "A String", # The resource name of associated generator. Format: `projects//locations//generators/` + "query": "A String", # Query text. Article Search uses this to store the input query used to generate the search results. + }, }, ], "startTime": "A String", # The time at which the conversation started. @@ -2601,98 +2161,6 @@

Method Details

"question": "A String", # The corresponding FAQ question. "source": "A String", # The knowledge document that this answer was extracted from. Format: projects/{project}/knowledgeBases/{knowledge_base}/documents/{document}. }, - "generatorSuggestionResult": { # Represents response from generators. # The generator suggestion result. - "generatorSuggestion": { # Suggestion generated using a Generator. # The suggestion generated from the Generator. - "agentCoachingSuggestion": { # Suggestion for coaching agents. # Optional. Suggestion to coach the agent. - "agentActionSuggestions": [ # Optional. Suggested actions for the agent to take. - { # Actions suggested for the agent. This is based on applicable instructions. - "agentAction": "A String", # Optional. The suggested action for the agent. - }, - ], - "applicableInstructions": [ # Optional. Instructions applicable based on the current context. - { # Agent Coaching instructions that customer can configure. - "agentAction": "A String", # Optional. The action that human agent should take. For example, "apologize for the slow shipping". If the users only want to use agent coaching for intent detection, agent_action can be empty - "condition": "A String", # Optional. The condition of the instruction. For example, "the customer wants to cancel an order". If the users want the instruction to be triggered unconditionally, the condition can be empty. - "description": "A String", # Optional. The detailed description of this instruction. - "displayName": "A String", # Optional. Display name for the instruction. - "metadata": { # Optional. Additional information attached to this instruction. - "a_key": "A String", - }, - "systemAction": "A String", # Optional. The action that system should take. For example, "call GetOrderTime with order_number={order number provided by the customer}". If the users don't have plugins or don't want to trigger plugins, the system_action can be empty - }, - ], - "sampleResponses": [ # Optional. Sample response for the Agent. - { # Sample response that the agent can use. This could be based on applicable instructions and ingested data from other systems. - "responseText": "A String", # Optional. Sample response for Agent in text. - }, - ], - "suggestionEval": { # Self evaluations of the suggestion. # Self evaluation of the suggestion. - "actionActionSuggestionEval": "A String", # Optional. Eval for Agent action suggestion. - "sampleResponseEval": "A String", # Optional. Eval for sample response. - }, - "suggestionReasoning": { # Reasoning for the suggestion. # Reasoning for the suggestion. - "agentActionTaken": "A String", # Optional. The actions that the agent has taken already. - "issueSummary": "A String", # Optional. Summary of the issue. - }, - }, - "freeFormSuggestion": { # Suggestion generated using free form generator. # Optional. Free form suggestion. - "labels": [ # Optional. Labels for the generator. - "A String", - ], - "response": "A String", # Required. Free form suggestion. - }, - "summarySuggestion": { # Suggested summary of the conversation. # Optional. Suggested summary. - "summarySections": [ # Required. All the parts of generated summary. - { # A component of the generated summary. - "section": "A String", # Required. Name of the section. - "summary": "A String", # Required. Summary text for the section. - }, - ], - }, - }, - }, - "knowledgeAssistResult": { # Response for Knowledge Assist. Contains suggested query and optionally includes an answer for the query. # The Knowledge Assist result. 
- "suggestedQuery": { # Represents a suggested query. # The query suggested based on the context. Suggestion is made only if it is different from the previous suggestion. - "queryText": "A String", # Suggested query text. - "score": 3.14, # Suggested query score. - }, - "suggestedQueryAnswer": { # Represents an answer from Knowledge. Cuurently supports FAQ and Generative answers. # The answer generated for the suggested query. Whether or not an answer is generated depends on how confident we are about the generated query. - "answerText": "A String", # The piece of text from the `source` that answers this suggested query. - "faqSource": { # Details about source of FAQ answer. # Populated if the prediction came from FAQ. - "document": "A String", # Indicates which Knowledge Document this answer was extracted from. Format: `projects//knowledgeBases//documents/`. - "question": "A String", # The corresponding FAQ question. - }, - "generativeSource": { # Details about source of Generative answer. # Populated if the prediction was Generative. - "snippets": [ # All snippets used for this Generative Prediction, with their source URI and data. - { # Snippet Source for a Generative Prediction. - "document": "A String", # Indicates which Knowledge Document this snippet was extracted from. Format: `projects//knowledgeBases//documents/`. - "text": "A String", # text taken from that URI. - "title": "A String", # Title of the document. - "uri": "A String", # URI the data is sourced from. - }, - ], - }, - "intentMatchingSource": { # Details about source of Intent Matching answer. # Populated if the prediction was from intent matching. - "title": "A String", # Title of the document. - "uri": "A String", # URI the data is sourced from. - }, - "matchConfidence": 3.14, # The system's confidence score that this answer is a good match for this conversational query. The range is from 0.0 (completely uncertain) to 1.0 (completely certain). - }, - }, - "knowledgeSearchResult": { # Represents a SearchKnowledge answer. # The Knowledge Search result. - "answer": "A String", # The piece of text from the knowledge base documents that answers the search query - "answerRecord": "A String", # The name of the answer record. Format: `projects//locations//answer Records/` - "answerSources": [ # All sources used to generate the answer. - { # The sources of the answers. - "document": "A String", # The document from which the snippet was extracted. Format: `projects//knowledgeBases//documents/` - "snippet": "A String", # The relevant snippet of the article. - "title": "A String", # The title of the article. - "uri": "A String", # The URI of the article. - }, - ], - "answerType": "A String", # The type of the answer. - "confidenceScore": 3.14, # The confidence score in [0.0, 1.0] range. - }, "smartComposeSuggestion": { # Agent Assist Smart Compose suggestion data. # Agent Assist Smart Compose suggestion data. "confidenceScore": 3.14, # The system's confidence score that this suggestion is a good match for this conversation, ranging from 0.0 (completely uncertain) to 1.0 (completely certain). "metadata": { # Map that contains metadata about the Smart Compose suggestion and the document from which it originates. @@ -2713,6 +2181,10 @@

Method Details

"transcriptIndex": 42, # The index in the sequence of transcribed pieces of the conversation where the boundary is located. This index starts at zero. "wordIndex": 42, # The word index of this boundary with respect to the first word in the transcript piece. This index starts at zero. }, + "userInput": { # Explicit input used for generating the answer # Explicit input used for generating the answer + "generatorName": "A String", # The resource name of associated generator. Format: `projects//locations//generators/` + "query": "A String", # Query text. Article Search uses this to store the input query used to generate the search results. + }, }, ], "startTime": "A String", # The time at which the conversation started. @@ -2994,98 +2466,6 @@

Method Details

"question": "A String", # The corresponding FAQ question. "source": "A String", # The knowledge document that this answer was extracted from. Format: projects/{project}/knowledgeBases/{knowledge_base}/documents/{document}. }, - "generatorSuggestionResult": { # Represents response from generators. # The generator suggestion result. - "generatorSuggestion": { # Suggestion generated using a Generator. # The suggestion generated from the Generator. - "agentCoachingSuggestion": { # Suggestion for coaching agents. # Optional. Suggestion to coach the agent. - "agentActionSuggestions": [ # Optional. Suggested actions for the agent to take. - { # Actions suggested for the agent. This is based on applicable instructions. - "agentAction": "A String", # Optional. The suggested action for the agent. - }, - ], - "applicableInstructions": [ # Optional. Instructions applicable based on the current context. - { # Agent Coaching instructions that customer can configure. - "agentAction": "A String", # Optional. The action that human agent should take. For example, "apologize for the slow shipping". If the users only want to use agent coaching for intent detection, agent_action can be empty - "condition": "A String", # Optional. The condition of the instruction. For example, "the customer wants to cancel an order". If the users want the instruction to be triggered unconditionally, the condition can be empty. - "description": "A String", # Optional. The detailed description of this instruction. - "displayName": "A String", # Optional. Display name for the instruction. - "metadata": { # Optional. Additional information attached to this instruction. - "a_key": "A String", - }, - "systemAction": "A String", # Optional. The action that system should take. For example, "call GetOrderTime with order_number={order number provided by the customer}". If the users don't have plugins or don't want to trigger plugins, the system_action can be empty - }, - ], - "sampleResponses": [ # Optional. Sample response for the Agent. - { # Sample response that the agent can use. This could be based on applicable instructions and ingested data from other systems. - "responseText": "A String", # Optional. Sample response for Agent in text. - }, - ], - "suggestionEval": { # Self evaluations of the suggestion. # Self evaluation of the suggestion. - "actionActionSuggestionEval": "A String", # Optional. Eval for Agent action suggestion. - "sampleResponseEval": "A String", # Optional. Eval for sample response. - }, - "suggestionReasoning": { # Reasoning for the suggestion. # Reasoning for the suggestion. - "agentActionTaken": "A String", # Optional. The actions that the agent has taken already. - "issueSummary": "A String", # Optional. Summary of the issue. - }, - }, - "freeFormSuggestion": { # Suggestion generated using free form generator. # Optional. Free form suggestion. - "labels": [ # Optional. Labels for the generator. - "A String", - ], - "response": "A String", # Required. Free form suggestion. - }, - "summarySuggestion": { # Suggested summary of the conversation. # Optional. Suggested summary. - "summarySections": [ # Required. All the parts of generated summary. - { # A component of the generated summary. - "section": "A String", # Required. Name of the section. - "summary": "A String", # Required. Summary text for the section. - }, - ], - }, - }, - }, - "knowledgeAssistResult": { # Response for Knowledge Assist. Contains suggested query and optionally includes an answer for the query. # The Knowledge Assist result. 
- "suggestedQuery": { # Represents a suggested query. # The query suggested based on the context. Suggestion is made only if it is different from the previous suggestion. - "queryText": "A String", # Suggested query text. - "score": 3.14, # Suggested query score. - }, - "suggestedQueryAnswer": { # Represents an answer from Knowledge. Cuurently supports FAQ and Generative answers. # The answer generated for the suggested query. Whether or not an answer is generated depends on how confident we are about the generated query. - "answerText": "A String", # The piece of text from the `source` that answers this suggested query. - "faqSource": { # Details about source of FAQ answer. # Populated if the prediction came from FAQ. - "document": "A String", # Indicates which Knowledge Document this answer was extracted from. Format: `projects//knowledgeBases//documents/`. - "question": "A String", # The corresponding FAQ question. - }, - "generativeSource": { # Details about source of Generative answer. # Populated if the prediction was Generative. - "snippets": [ # All snippets used for this Generative Prediction, with their source URI and data. - { # Snippet Source for a Generative Prediction. - "document": "A String", # Indicates which Knowledge Document this snippet was extracted from. Format: `projects//knowledgeBases//documents/`. - "text": "A String", # text taken from that URI. - "title": "A String", # Title of the document. - "uri": "A String", # URI the data is sourced from. - }, - ], - }, - "intentMatchingSource": { # Details about source of Intent Matching answer. # Populated if the prediction was from intent matching. - "title": "A String", # Title of the document. - "uri": "A String", # URI the data is sourced from. - }, - "matchConfidence": 3.14, # The system's confidence score that this answer is a good match for this conversational query. The range is from 0.0 (completely uncertain) to 1.0 (completely certain). - }, - }, - "knowledgeSearchResult": { # Represents a SearchKnowledge answer. # The Knowledge Search result. - "answer": "A String", # The piece of text from the knowledge base documents that answers the search query - "answerRecord": "A String", # The name of the answer record. Format: `projects//locations//answer Records/` - "answerSources": [ # All sources used to generate the answer. - { # The sources of the answers. - "document": "A String", # The document from which the snippet was extracted. Format: `projects//knowledgeBases//documents/` - "snippet": "A String", # The relevant snippet of the article. - "title": "A String", # The title of the article. - "uri": "A String", # The URI of the article. - }, - ], - "answerType": "A String", # The type of the answer. - "confidenceScore": 3.14, # The confidence score in [0.0, 1.0] range. - }, "smartComposeSuggestion": { # Agent Assist Smart Compose suggestion data. # Agent Assist Smart Compose suggestion data. "confidenceScore": 3.14, # The system's confidence score that this suggestion is a good match for this conversation, ranging from 0.0 (completely uncertain) to 1.0 (completely certain). "metadata": { # Map that contains metadata about the Smart Compose suggestion and the document from which it originates. @@ -3106,6 +2486,10 @@

Method Details

"transcriptIndex": 42, # The index in the sequence of transcribed pieces of the conversation where the boundary is located. This index starts at zero. "wordIndex": 42, # The word index of this boundary with respect to the first word in the transcript piece. This index starts at zero. }, + "userInput": { # Explicit input used for generating the answer # Explicit input used for generating the answer + "generatorName": "A String", # The resource name of associated generator. Format: `projects//locations//generators/` + "query": "A String", # Query text. Article Search uses this to store the input query used to generate the search results. + }, }, ], "startTime": "A String", # The time at which the conversation started. diff --git a/docs/dyn/datamigration_v1.projects.locations.migrationJobs.html b/docs/dyn/datamigration_v1.projects.locations.migrationJobs.html index 8dddf96a5fd..a0b02dcdae9 100644 --- a/docs/dyn/datamigration_v1.projects.locations.migrationJobs.html +++ b/docs/dyn/datamigration_v1.projects.locations.migrationJobs.html @@ -212,6 +212,7 @@

Method Details

}, }, ], + "useDiffBackup": True or False, # Optional. Enable differential backups. }, "state": "A String", # The current migration job state. "staticIpConnectivity": { # The source database will allow incoming connections from the public IP of the destination database. You can retrieve the public IP of the Cloud SQL instance from the Cloud SQL console or using Cloud SQL APIs. No additional configuration is required. # static ip connectivity data (default, no additional details needed). @@ -476,6 +477,7 @@
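The new useDiffBackup flag opts a migration job into differential backups. A minimal create-request sketch; note this is not a complete request body (source and destination connection profiles are omitted), the parent object for the flag is an assumption from surrounding context (a SQL Server homogeneous migration config), and all IDs are placeholders:

    from googleapiclient.discovery import build

    dms = build("datamigration", "v1")
    parent = "projects/my-project/locations/us-central1"
    body = {
        "type": "CONTINUOUS",
        "sqlserverHomogeneousMigrationJobConfig": {  # assumed location of the flag
            "useDiffBackup": True,  # Optional. Enable differential backups.
        },
    }
    request = dms.projects().locations().migrationJobs().create(
        parent=parent, migrationJobId="my-job", body=body
    )
    # response = request.execute()  # returns a long-running Operation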

Method Details

}, }, ], + "useDiffBackup": True or False, # Optional. Enable differential backups. }, "state": "A String", # The current migration job state. "staticIpConnectivity": { # The source database will allow incoming connections from the public IP of the destination database. You can retrieve the public IP of the Cloud SQL instance from the Cloud SQL console or using Cloud SQL APIs. No additional configuration is required. # static ip connectivity data (default, no additional details needed). @@ -622,6 +624,7 @@

Method Details

}, }, ], + "useDiffBackup": True or False, # Optional. Enable differential backups. }, "state": "A String", # The current migration job state. "staticIpConnectivity": { # The source database will allow incoming connections from the public IP of the destination database. You can retrieve the public IP of the Cloud SQL instance from the Cloud SQL console or using Cloud SQL APIs. No additional configuration is required. # static ip connectivity data (default, no additional details needed). @@ -729,6 +732,7 @@

Method Details

}, }, ], + "useDiffBackup": True or False, # Optional. Enable differential backups. }, "state": "A String", # The current migration job state. "staticIpConnectivity": { # The source database will allow incoming connections from the public IP of the destination database. You can retrieve the public IP of the Cloud SQL instance from the Cloud SQL console or using Cloud SQL APIs. No additional configuration is required. # static ip connectivity data (default, no additional details needed). @@ -1170,6 +1174,7 @@

Method Details

}, }, ], + "useDiffBackup": True or False, # Optional. Enable differential backups. }, "state": "A String", # The current migration job state. "staticIpConnectivity": { # The source database will allow incoming connections from the public IP of the destination database. You can retrieve the public IP of the Cloud SQL instance from the Cloud SQL console or using Cloud SQL APIs. No additional configuration is required. # static ip connectivity data (default, no additional details needed). diff --git a/docs/dyn/dataplex_v1.projects.locations.dataScans.html b/docs/dyn/dataplex_v1.projects.locations.dataScans.html index 2b31d365b90..7d51b67f2e3 100644 --- a/docs/dyn/dataplex_v1.projects.locations.dataScans.html +++ b/docs/dyn/dataplex_v1.projects.locations.dataScans.html @@ -240,7 +240,7 @@

Method Details

"rowCount": "A String", # The count of rows processed. "rules": [ # A list of all the rules in a job, and their results. { # DataQualityRuleResult provides a more detailed, per-rule view of the results. - "assertionRowCount": "A String", # Output only. The number of rows returned by the sql statement in the SqlAssertion rule.This field is only valid for SqlAssertion rules. + "assertionRowCount": "A String", # Output only. The number of rows returned by the SQL statement in a SQL assertion rule.This field is only valid for SQL assertion rules. "evaluatedCount": "A String", # The number of rows a rule was evaluated against.This field is only valid for row-level type rules.Evaluated count can be configured to either include all rows (default) - with null rows automatically failing rule evaluation, or exclude null rows from the evaluated_count, by setting ignore_nulls = true. "failingRowsQuery": "A String", # The query to find rows that did not pass this rule.This field is only valid for row-level type rules. "nullCount": "A String", # The number of rows with null values in the specified column. @@ -272,7 +272,7 @@

Method Details

"A String", ], }, - "sqlAssertion": { # Queries for rows returned by the provided SQL statement. If any rows are are returned, this rule fails.The SQL statement needs to use BigQuery standard SQL syntax, and must not contain any semicolons.${data()} can be used to reference the rows being evaluated, i.e. the table after all additional filters (row filters, incremental data filters, sampling) are applied.Example: SELECT * FROM ${data()} WHERE price < 0 # Aggregate rule which evaluates the number of rows returned for the provided statement. + "sqlAssertion": { # A SQL statement that is evaluated to return rows that match an invalid state. If any rows are are returned, this rule fails.The SQL statement must use BigQuery standard SQL syntax, and must not contain any semicolons.You can use the data reference parameter ${data()} to reference the source table with all of its precondition filters applied. Examples of precondition filters include row filters, incremental data filters, and sampling. For more information, see Data reference parameter (https://cloud.google.com/dataplex/docs/auto-data-quality-overview#data-reference-parameter).Example: SELECT * FROM ${data()} WHERE price < 0 # Aggregate rule which evaluates the number of rows returned for the provided statement. If any rows are returned, this rule fails. "sqlStatement": "A String", # Optional. The SQL statement. }, "statisticRangeExpectation": { # Evaluates whether the column aggregate statistic lies between a specified range. # Aggregate rule which evaluates whether the column aggregate statistic lies between a specified range. @@ -347,7 +347,7 @@

Method Details

"A String", ], }, - "sqlAssertion": { # Queries for rows returned by the provided SQL statement. If any rows are are returned, this rule fails.The SQL statement needs to use BigQuery standard SQL syntax, and must not contain any semicolons.${data()} can be used to reference the rows being evaluated, i.e. the table after all additional filters (row filters, incremental data filters, sampling) are applied.Example: SELECT * FROM ${data()} WHERE price < 0 # Aggregate rule which evaluates the number of rows returned for the provided statement. + "sqlAssertion": { # A SQL statement that is evaluated to return rows that match an invalid state. If any rows are are returned, this rule fails.The SQL statement must use BigQuery standard SQL syntax, and must not contain any semicolons.You can use the data reference parameter ${data()} to reference the source table with all of its precondition filters applied. Examples of precondition filters include row filters, incremental data filters, and sampling. For more information, see Data reference parameter (https://cloud.google.com/dataplex/docs/auto-data-quality-overview#data-reference-parameter).Example: SELECT * FROM ${data()} WHERE price < 0 # Aggregate rule which evaluates the number of rows returned for the provided statement. If any rows are returned, this rule fails. "sqlStatement": "A String", # Optional. The SQL statement. }, "statisticRangeExpectation": { # Evaluates whether the column aggregate statistic lies between a specified range. # Aggregate rule which evaluates whether the column aggregate statistic lies between a specified range. @@ -506,7 +506,7 @@

Method Details

"A String", ], }, - "sqlAssertion": { # Queries for rows returned by the provided SQL statement. If any rows are are returned, this rule fails.The SQL statement needs to use BigQuery standard SQL syntax, and must not contain any semicolons.${data()} can be used to reference the rows being evaluated, i.e. the table after all additional filters (row filters, incremental data filters, sampling) are applied.Example: SELECT * FROM ${data()} WHERE price < 0 # Aggregate rule which evaluates the number of rows returned for the provided statement. + "sqlAssertion": { # A SQL statement that is evaluated to return rows that match an invalid state. If any rows are are returned, this rule fails.The SQL statement must use BigQuery standard SQL syntax, and must not contain any semicolons.You can use the data reference parameter ${data()} to reference the source table with all of its precondition filters applied. Examples of precondition filters include row filters, incremental data filters, and sampling. For more information, see Data reference parameter (https://cloud.google.com/dataplex/docs/auto-data-quality-overview#data-reference-parameter).Example: SELECT * FROM ${data()} WHERE price < 0 # Aggregate rule which evaluates the number of rows returned for the provided statement. If any rows are returned, this rule fails. "sqlStatement": "A String", # Optional. The SQL statement. }, "statisticRangeExpectation": { # Evaluates whether the column aggregate statistic lies between a specified range. # Aggregate rule which evaluates whether the column aggregate statistic lies between a specified range. @@ -656,7 +656,7 @@

Method Details

"rowCount": "A String", # The count of rows processed. "rules": [ # A list of all the rules in a job, and their results. { # DataQualityRuleResult provides a more detailed, per-rule view of the results. - "assertionRowCount": "A String", # Output only. The number of rows returned by the sql statement in the SqlAssertion rule.This field is only valid for SqlAssertion rules. + "assertionRowCount": "A String", # Output only. The number of rows returned by the SQL statement in a SQL assertion rule.This field is only valid for SQL assertion rules. "evaluatedCount": "A String", # The number of rows a rule was evaluated against.This field is only valid for row-level type rules.Evaluated count can be configured to either include all rows (default) - with null rows automatically failing rule evaluation, or exclude null rows from the evaluated_count, by setting ignore_nulls = true. "failingRowsQuery": "A String", # The query to find rows that did not pass this rule.This field is only valid for row-level type rules. "nullCount": "A String", # The number of rows with null values in the specified column. @@ -688,7 +688,7 @@

Method Details

"A String", ], }, - "sqlAssertion": { # Queries for rows returned by the provided SQL statement. If any rows are are returned, this rule fails.The SQL statement needs to use BigQuery standard SQL syntax, and must not contain any semicolons.${data()} can be used to reference the rows being evaluated, i.e. the table after all additional filters (row filters, incremental data filters, sampling) are applied.Example: SELECT * FROM ${data()} WHERE price < 0 # Aggregate rule which evaluates the number of rows returned for the provided statement. + "sqlAssertion": { # A SQL statement that is evaluated to return rows that match an invalid state. If any rows are are returned, this rule fails.The SQL statement must use BigQuery standard SQL syntax, and must not contain any semicolons.You can use the data reference parameter ${data()} to reference the source table with all of its precondition filters applied. Examples of precondition filters include row filters, incremental data filters, and sampling. For more information, see Data reference parameter (https://cloud.google.com/dataplex/docs/auto-data-quality-overview#data-reference-parameter).Example: SELECT * FROM ${data()} WHERE price < 0 # Aggregate rule which evaluates the number of rows returned for the provided statement. If any rows are returned, this rule fails. "sqlStatement": "A String", # Optional. The SQL statement. }, "statisticRangeExpectation": { # Evaluates whether the column aggregate statistic lies between a specified range. # Aggregate rule which evaluates whether the column aggregate statistic lies between a specified range. @@ -763,7 +763,7 @@

Method Details

"A String", ], }, - "sqlAssertion": { # Queries for rows returned by the provided SQL statement. If any rows are are returned, this rule fails.The SQL statement needs to use BigQuery standard SQL syntax, and must not contain any semicolons.${data()} can be used to reference the rows being evaluated, i.e. the table after all additional filters (row filters, incremental data filters, sampling) are applied.Example: SELECT * FROM ${data()} WHERE price < 0 # Aggregate rule which evaluates the number of rows returned for the provided statement. + "sqlAssertion": { # A SQL statement that is evaluated to return rows that match an invalid state. If any rows are are returned, this rule fails.The SQL statement must use BigQuery standard SQL syntax, and must not contain any semicolons.You can use the data reference parameter ${data()} to reference the source table with all of its precondition filters applied. Examples of precondition filters include row filters, incremental data filters, and sampling. For more information, see Data reference parameter (https://cloud.google.com/dataplex/docs/auto-data-quality-overview#data-reference-parameter).Example: SELECT * FROM ${data()} WHERE price < 0 # Aggregate rule which evaluates the number of rows returned for the provided statement. If any rows are returned, this rule fails. "sqlStatement": "A String", # Optional. The SQL statement. }, "statisticRangeExpectation": { # Evaluates whether the column aggregate statistic lies between a specified range. # Aggregate rule which evaluates whether the column aggregate statistic lies between a specified range. @@ -988,7 +988,7 @@

Method Details

"rowCount": "A String", # The count of rows processed. "rules": [ # A list of all the rules in a job, and their results. { # DataQualityRuleResult provides a more detailed, per-rule view of the results. - "assertionRowCount": "A String", # Output only. The number of rows returned by the sql statement in the SqlAssertion rule.This field is only valid for SqlAssertion rules. + "assertionRowCount": "A String", # Output only. The number of rows returned by the SQL statement in a SQL assertion rule.This field is only valid for SQL assertion rules. "evaluatedCount": "A String", # The number of rows a rule was evaluated against.This field is only valid for row-level type rules.Evaluated count can be configured to either include all rows (default) - with null rows automatically failing rule evaluation, or exclude null rows from the evaluated_count, by setting ignore_nulls = true. "failingRowsQuery": "A String", # The query to find rows that did not pass this rule.This field is only valid for row-level type rules. "nullCount": "A String", # The number of rows with null values in the specified column. @@ -1020,7 +1020,7 @@

Method Details

"A String", ], }, - "sqlAssertion": { # Queries for rows returned by the provided SQL statement. If any rows are are returned, this rule fails.The SQL statement needs to use BigQuery standard SQL syntax, and must not contain any semicolons.${data()} can be used to reference the rows being evaluated, i.e. the table after all additional filters (row filters, incremental data filters, sampling) are applied.Example: SELECT * FROM ${data()} WHERE price < 0 # Aggregate rule which evaluates the number of rows returned for the provided statement. + "sqlAssertion": { # A SQL statement that is evaluated to return rows that match an invalid state. If any rows are are returned, this rule fails.The SQL statement must use BigQuery standard SQL syntax, and must not contain any semicolons.You can use the data reference parameter ${data()} to reference the source table with all of its precondition filters applied. Examples of precondition filters include row filters, incremental data filters, and sampling. For more information, see Data reference parameter (https://cloud.google.com/dataplex/docs/auto-data-quality-overview#data-reference-parameter).Example: SELECT * FROM ${data()} WHERE price < 0 # Aggregate rule which evaluates the number of rows returned for the provided statement. If any rows are returned, this rule fails. "sqlStatement": "A String", # Optional. The SQL statement. }, "statisticRangeExpectation": { # Evaluates whether the column aggregate statistic lies between a specified range. # Aggregate rule which evaluates whether the column aggregate statistic lies between a specified range. @@ -1095,7 +1095,7 @@

Method Details

"A String", ], }, - "sqlAssertion": { # Queries for rows returned by the provided SQL statement. If any rows are are returned, this rule fails.The SQL statement needs to use BigQuery standard SQL syntax, and must not contain any semicolons.${data()} can be used to reference the rows being evaluated, i.e. the table after all additional filters (row filters, incremental data filters, sampling) are applied.Example: SELECT * FROM ${data()} WHERE price < 0 # Aggregate rule which evaluates the number of rows returned for the provided statement. + "sqlAssertion": { # A SQL statement that is evaluated to return rows that match an invalid state. If any rows are are returned, this rule fails.The SQL statement must use BigQuery standard SQL syntax, and must not contain any semicolons.You can use the data reference parameter ${data()} to reference the source table with all of its precondition filters applied. Examples of precondition filters include row filters, incremental data filters, and sampling. For more information, see Data reference parameter (https://cloud.google.com/dataplex/docs/auto-data-quality-overview#data-reference-parameter).Example: SELECT * FROM ${data()} WHERE price < 0 # Aggregate rule which evaluates the number of rows returned for the provided statement. If any rows are returned, this rule fails. "sqlStatement": "A String", # Optional. The SQL statement. }, "statisticRangeExpectation": { # Evaluates whether the column aggregate statistic lies between a specified range. # Aggregate rule which evaluates whether the column aggregate statistic lies between a specified range. @@ -1281,7 +1281,7 @@

Method Details

"rowCount": "A String", # The count of rows processed. "rules": [ # A list of all the rules in a job, and their results. { # DataQualityRuleResult provides a more detailed, per-rule view of the results. - "assertionRowCount": "A String", # Output only. The number of rows returned by the sql statement in the SqlAssertion rule.This field is only valid for SqlAssertion rules. + "assertionRowCount": "A String", # Output only. The number of rows returned by the SQL statement in a SQL assertion rule.This field is only valid for SQL assertion rules. "evaluatedCount": "A String", # The number of rows a rule was evaluated against.This field is only valid for row-level type rules.Evaluated count can be configured to either include all rows (default) - with null rows automatically failing rule evaluation, or exclude null rows from the evaluated_count, by setting ignore_nulls = true. "failingRowsQuery": "A String", # The query to find rows that did not pass this rule.This field is only valid for row-level type rules. "nullCount": "A String", # The number of rows with null values in the specified column. @@ -1313,7 +1313,7 @@

Method Details

"A String", ], }, - "sqlAssertion": { # Queries for rows returned by the provided SQL statement. If any rows are are returned, this rule fails.The SQL statement needs to use BigQuery standard SQL syntax, and must not contain any semicolons.${data()} can be used to reference the rows being evaluated, i.e. the table after all additional filters (row filters, incremental data filters, sampling) are applied.Example: SELECT * FROM ${data()} WHERE price < 0 # Aggregate rule which evaluates the number of rows returned for the provided statement. + "sqlAssertion": { # A SQL statement that is evaluated to return rows that match an invalid state. If any rows are are returned, this rule fails.The SQL statement must use BigQuery standard SQL syntax, and must not contain any semicolons.You can use the data reference parameter ${data()} to reference the source table with all of its precondition filters applied. Examples of precondition filters include row filters, incremental data filters, and sampling. For more information, see Data reference parameter (https://cloud.google.com/dataplex/docs/auto-data-quality-overview#data-reference-parameter).Example: SELECT * FROM ${data()} WHERE price < 0 # Aggregate rule which evaluates the number of rows returned for the provided statement. If any rows are returned, this rule fails. "sqlStatement": "A String", # Optional. The SQL statement. }, "statisticRangeExpectation": { # Evaluates whether the column aggregate statistic lies between a specified range. # Aggregate rule which evaluates whether the column aggregate statistic lies between a specified range. @@ -1388,7 +1388,7 @@

Method Details

"A String", ], }, - "sqlAssertion": { # Queries for rows returned by the provided SQL statement. If any rows are are returned, this rule fails.The SQL statement needs to use BigQuery standard SQL syntax, and must not contain any semicolons.${data()} can be used to reference the rows being evaluated, i.e. the table after all additional filters (row filters, incremental data filters, sampling) are applied.Example: SELECT * FROM ${data()} WHERE price < 0 # Aggregate rule which evaluates the number of rows returned for the provided statement. + "sqlAssertion": { # A SQL statement that is evaluated to return rows that match an invalid state. If any rows are are returned, this rule fails.The SQL statement must use BigQuery standard SQL syntax, and must not contain any semicolons.You can use the data reference parameter ${data()} to reference the source table with all of its precondition filters applied. Examples of precondition filters include row filters, incremental data filters, and sampling. For more information, see Data reference parameter (https://cloud.google.com/dataplex/docs/auto-data-quality-overview#data-reference-parameter).Example: SELECT * FROM ${data()} WHERE price < 0 # Aggregate rule which evaluates the number of rows returned for the provided statement. If any rows are returned, this rule fails. "sqlStatement": "A String", # Optional. The SQL statement. }, "statisticRangeExpectation": { # Evaluates whether the column aggregate statistic lies between a specified range. # Aggregate rule which evaluates whether the column aggregate statistic lies between a specified range. @@ -1591,7 +1591,7 @@

Method Details

"rowCount": "A String", # The count of rows processed. "rules": [ # A list of all the rules in a job, and their results. { # DataQualityRuleResult provides a more detailed, per-rule view of the results. - "assertionRowCount": "A String", # Output only. The number of rows returned by the sql statement in the SqlAssertion rule.This field is only valid for SqlAssertion rules. + "assertionRowCount": "A String", # Output only. The number of rows returned by the SQL statement in a SQL assertion rule.This field is only valid for SQL assertion rules. "evaluatedCount": "A String", # The number of rows a rule was evaluated against.This field is only valid for row-level type rules.Evaluated count can be configured to either include all rows (default) - with null rows automatically failing rule evaluation, or exclude null rows from the evaluated_count, by setting ignore_nulls = true. "failingRowsQuery": "A String", # The query to find rows that did not pass this rule.This field is only valid for row-level type rules. "nullCount": "A String", # The number of rows with null values in the specified column. @@ -1623,7 +1623,7 @@

Method Details

"A String", ], }, - "sqlAssertion": { # Queries for rows returned by the provided SQL statement. If any rows are are returned, this rule fails.The SQL statement needs to use BigQuery standard SQL syntax, and must not contain any semicolons.${data()} can be used to reference the rows being evaluated, i.e. the table after all additional filters (row filters, incremental data filters, sampling) are applied.Example: SELECT * FROM ${data()} WHERE price < 0 # Aggregate rule which evaluates the number of rows returned for the provided statement. + "sqlAssertion": { # A SQL statement that is evaluated to return rows that match an invalid state. If any rows are are returned, this rule fails.The SQL statement must use BigQuery standard SQL syntax, and must not contain any semicolons.You can use the data reference parameter ${data()} to reference the source table with all of its precondition filters applied. Examples of precondition filters include row filters, incremental data filters, and sampling. For more information, see Data reference parameter (https://cloud.google.com/dataplex/docs/auto-data-quality-overview#data-reference-parameter).Example: SELECT * FROM ${data()} WHERE price < 0 # Aggregate rule which evaluates the number of rows returned for the provided statement. If any rows are returned, this rule fails. "sqlStatement": "A String", # Optional. The SQL statement. }, "statisticRangeExpectation": { # Evaluates whether the column aggregate statistic lies between a specified range. # Aggregate rule which evaluates whether the column aggregate statistic lies between a specified range. @@ -1698,7 +1698,7 @@

Method Details

"A String", ], }, - "sqlAssertion": { # Queries for rows returned by the provided SQL statement. If any rows are are returned, this rule fails.The SQL statement needs to use BigQuery standard SQL syntax, and must not contain any semicolons.${data()} can be used to reference the rows being evaluated, i.e. the table after all additional filters (row filters, incremental data filters, sampling) are applied.Example: SELECT * FROM ${data()} WHERE price < 0 # Aggregate rule which evaluates the number of rows returned for the provided statement. + "sqlAssertion": { # A SQL statement that is evaluated to return rows that match an invalid state. If any rows are are returned, this rule fails.The SQL statement must use BigQuery standard SQL syntax, and must not contain any semicolons.You can use the data reference parameter ${data()} to reference the source table with all of its precondition filters applied. Examples of precondition filters include row filters, incremental data filters, and sampling. For more information, see Data reference parameter (https://cloud.google.com/dataplex/docs/auto-data-quality-overview#data-reference-parameter).Example: SELECT * FROM ${data()} WHERE price < 0 # Aggregate rule which evaluates the number of rows returned for the provided statement. If any rows are returned, this rule fails. "sqlStatement": "A String", # Optional. The SQL statement. }, "statisticRangeExpectation": { # Evaluates whether the column aggregate statistic lies between a specified range. # Aggregate rule which evaluates whether the column aggregate statistic lies between a specified range. diff --git a/docs/dyn/dataplex_v1.projects.locations.dataScans.jobs.html b/docs/dyn/dataplex_v1.projects.locations.dataScans.jobs.html index df3fe99afd2..e22431801bc 100644 --- a/docs/dyn/dataplex_v1.projects.locations.dataScans.jobs.html +++ b/docs/dyn/dataplex_v1.projects.locations.dataScans.jobs.html @@ -142,7 +142,7 @@

Method Details

"A String", ], }, - "sqlAssertion": { # Queries for rows returned by the provided SQL statement. If any rows are are returned, this rule fails.The SQL statement needs to use BigQuery standard SQL syntax, and must not contain any semicolons.${data()} can be used to reference the rows being evaluated, i.e. the table after all additional filters (row filters, incremental data filters, sampling) are applied.Example: SELECT * FROM ${data()} WHERE price < 0 # Aggregate rule which evaluates the number of rows returned for the provided statement. + "sqlAssertion": { # A SQL statement that is evaluated to return rows that match an invalid state. If any rows are are returned, this rule fails.The SQL statement must use BigQuery standard SQL syntax, and must not contain any semicolons.You can use the data reference parameter ${data()} to reference the source table with all of its precondition filters applied. Examples of precondition filters include row filters, incremental data filters, and sampling. For more information, see Data reference parameter (https://cloud.google.com/dataplex/docs/auto-data-quality-overview#data-reference-parameter).Example: SELECT * FROM ${data()} WHERE price < 0 # Aggregate rule which evaluates the number of rows returned for the provided statement. If any rows are returned, this rule fails. "sqlStatement": "A String", # Optional. The SQL statement. }, "statisticRangeExpectation": { # Evaluates whether the column aggregate statistic lies between a specified range. # Aggregate rule which evaluates whether the column aggregate statistic lies between a specified range. @@ -287,7 +287,7 @@

Method Details

"rowCount": "A String", # The count of rows processed. "rules": [ # A list of all the rules in a job, and their results. { # DataQualityRuleResult provides a more detailed, per-rule view of the results. - "assertionRowCount": "A String", # Output only. The number of rows returned by the sql statement in the SqlAssertion rule.This field is only valid for SqlAssertion rules. + "assertionRowCount": "A String", # Output only. The number of rows returned by the SQL statement in a SQL assertion rule.This field is only valid for SQL assertion rules. "evaluatedCount": "A String", # The number of rows a rule was evaluated against.This field is only valid for row-level type rules.Evaluated count can be configured to either include all rows (default) - with null rows automatically failing rule evaluation, or exclude null rows from the evaluated_count, by setting ignore_nulls = true. "failingRowsQuery": "A String", # The query to find rows that did not pass this rule.This field is only valid for row-level type rules. "nullCount": "A String", # The number of rows with null values in the specified column. @@ -319,7 +319,7 @@

Method Details

"A String", ], }, - "sqlAssertion": { # Queries for rows returned by the provided SQL statement. If any rows are are returned, this rule fails.The SQL statement needs to use BigQuery standard SQL syntax, and must not contain any semicolons.${data()} can be used to reference the rows being evaluated, i.e. the table after all additional filters (row filters, incremental data filters, sampling) are applied.Example: SELECT * FROM ${data()} WHERE price < 0 # Aggregate rule which evaluates the number of rows returned for the provided statement. + "sqlAssertion": { # A SQL statement that is evaluated to return rows that match an invalid state. If any rows are are returned, this rule fails.The SQL statement must use BigQuery standard SQL syntax, and must not contain any semicolons.You can use the data reference parameter ${data()} to reference the source table with all of its precondition filters applied. Examples of precondition filters include row filters, incremental data filters, and sampling. For more information, see Data reference parameter (https://cloud.google.com/dataplex/docs/auto-data-quality-overview#data-reference-parameter).Example: SELECT * FROM ${data()} WHERE price < 0 # Aggregate rule which evaluates the number of rows returned for the provided statement. If any rows are returned, this rule fails. "sqlStatement": "A String", # Optional. The SQL statement. }, "statisticRangeExpectation": { # Evaluates whether the column aggregate statistic lies between a specified range. # Aggregate rule which evaluates whether the column aggregate statistic lies between a specified range. @@ -394,7 +394,7 @@

Method Details

"A String", ], }, - "sqlAssertion": { # Queries for rows returned by the provided SQL statement. If any rows are are returned, this rule fails.The SQL statement needs to use BigQuery standard SQL syntax, and must not contain any semicolons.${data()} can be used to reference the rows being evaluated, i.e. the table after all additional filters (row filters, incremental data filters, sampling) are applied.Example: SELECT * FROM ${data()} WHERE price < 0 # Aggregate rule which evaluates the number of rows returned for the provided statement. + "sqlAssertion": { # A SQL statement that is evaluated to return rows that match an invalid state. If any rows are are returned, this rule fails.The SQL statement must use BigQuery standard SQL syntax, and must not contain any semicolons.You can use the data reference parameter ${data()} to reference the source table with all of its precondition filters applied. Examples of precondition filters include row filters, incremental data filters, and sampling. For more information, see Data reference parameter (https://cloud.google.com/dataplex/docs/auto-data-quality-overview#data-reference-parameter).Example: SELECT * FROM ${data()} WHERE price < 0 # Aggregate rule which evaluates the number of rows returned for the provided statement. If any rows are returned, this rule fails. "sqlStatement": "A String", # Optional. The SQL statement. }, "statisticRangeExpectation": { # Evaluates whether the column aggregate statistic lies between a specified range. # Aggregate rule which evaluates whether the column aggregate statistic lies between a specified range. @@ -548,7 +548,7 @@

Method Details

"rowCount": "A String", # The count of rows processed. "rules": [ # A list of all the rules in a job, and their results. { # DataQualityRuleResult provides a more detailed, per-rule view of the results. - "assertionRowCount": "A String", # Output only. The number of rows returned by the sql statement in the SqlAssertion rule.This field is only valid for SqlAssertion rules. + "assertionRowCount": "A String", # Output only. The number of rows returned by the SQL statement in a SQL assertion rule.This field is only valid for SQL assertion rules. "evaluatedCount": "A String", # The number of rows a rule was evaluated against.This field is only valid for row-level type rules.Evaluated count can be configured to either include all rows (default) - with null rows automatically failing rule evaluation, or exclude null rows from the evaluated_count, by setting ignore_nulls = true. "failingRowsQuery": "A String", # The query to find rows that did not pass this rule.This field is only valid for row-level type rules. "nullCount": "A String", # The number of rows with null values in the specified column. @@ -580,7 +580,7 @@

Method Details

"A String", ], }, - "sqlAssertion": { # Queries for rows returned by the provided SQL statement. If any rows are are returned, this rule fails.The SQL statement needs to use BigQuery standard SQL syntax, and must not contain any semicolons.${data()} can be used to reference the rows being evaluated, i.e. the table after all additional filters (row filters, incremental data filters, sampling) are applied.Example: SELECT * FROM ${data()} WHERE price < 0 # Aggregate rule which evaluates the number of rows returned for the provided statement. + "sqlAssertion": { # A SQL statement that is evaluated to return rows that match an invalid state. If any rows are are returned, this rule fails.The SQL statement must use BigQuery standard SQL syntax, and must not contain any semicolons.You can use the data reference parameter ${data()} to reference the source table with all of its precondition filters applied. Examples of precondition filters include row filters, incremental data filters, and sampling. For more information, see Data reference parameter (https://cloud.google.com/dataplex/docs/auto-data-quality-overview#data-reference-parameter).Example: SELECT * FROM ${data()} WHERE price < 0 # Aggregate rule which evaluates the number of rows returned for the provided statement. If any rows are returned, this rule fails. "sqlStatement": "A String", # Optional. The SQL statement. }, "statisticRangeExpectation": { # Evaluates whether the column aggregate statistic lies between a specified range. # Aggregate rule which evaluates whether the column aggregate statistic lies between a specified range. @@ -655,7 +655,7 @@

Method Details

"A String", ], }, - "sqlAssertion": { # Queries for rows returned by the provided SQL statement. If any rows are are returned, this rule fails.The SQL statement needs to use BigQuery standard SQL syntax, and must not contain any semicolons.${data()} can be used to reference the rows being evaluated, i.e. the table after all additional filters (row filters, incremental data filters, sampling) are applied.Example: SELECT * FROM ${data()} WHERE price < 0 # Aggregate rule which evaluates the number of rows returned for the provided statement. + "sqlAssertion": { # A SQL statement that is evaluated to return rows that match an invalid state. If any rows are are returned, this rule fails.The SQL statement must use BigQuery standard SQL syntax, and must not contain any semicolons.You can use the data reference parameter ${data()} to reference the source table with all of its precondition filters applied. Examples of precondition filters include row filters, incremental data filters, and sampling. For more information, see Data reference parameter (https://cloud.google.com/dataplex/docs/auto-data-quality-overview#data-reference-parameter).Example: SELECT * FROM ${data()} WHERE price < 0 # Aggregate rule which evaluates the number of rows returned for the provided statement. If any rows are returned, this rule fails. "sqlStatement": "A String", # Optional. The SQL statement. }, "statisticRangeExpectation": { # Evaluates whether the column aggregate statistic lies between a specified range. # Aggregate rule which evaluates whether the column aggregate statistic lies between a specified range. diff --git a/docs/dyn/dataplex_v1.projects.locations.html b/docs/dyn/dataplex_v1.projects.locations.html index 6b9462d01ef..bd73d38aa5b 100644 --- a/docs/dyn/dataplex_v1.projects.locations.html +++ b/docs/dyn/dataplex_v1.projects.locations.html @@ -312,7 +312,7 @@

Method Details

"nextPageToken": "A String", # Pagination token. "results": [ # The results matching the search query. { # A single result of a SearchEntries request. - "dataplexEntry": { # An entry is a representation of a data asset which can be described by various metadata. # Entry format of the result. + "dataplexEntry": { # An entry is a representation of a data asset which can be described by various metadata. "aspects": { # Optional. The Aspects attached to the Entry. The format for the key can be one of the following: 1. {projectId}.{locationId}.{aspectTypeId} (if the aspect is attached directly to the entry) 2. {projectId}.{locationId}.{aspectTypeId}@{path} (if the aspect is attached to an entry's path) "a_key": { # An aspect is a single piece of metadata describing an entry. "aspectSource": { # AspectSource contains source system related information for the aspect. diff --git a/docs/dyn/datastream_v1.projects.locations.streams.html b/docs/dyn/datastream_v1.projects.locations.streams.html index 7353070265f..599524fc3b2 100644 --- a/docs/dyn/datastream_v1.projects.locations.streams.html +++ b/docs/dyn/datastream_v1.projects.locations.streams.html @@ -439,6 +439,8 @@

Method Details

}, "sourceConnectionProfile": "A String", # Required. Source connection profile resoource. Format: `projects/{project}/locations/{location}/connectionProfiles/{name}` "sqlServerSourceConfig": { # SQLServer data source configuration # SQLServer data source configuration. + "changeTables": { # Configuration to use Change Tables CDC read method. # CDC reader reads from change tables. + }, "excludeObjects": { # SQLServer database structure. # SQLServer objects to exclude from the stream. "schemas": [ # SQLServer schemas in the database server. { # SQLServer schema. @@ -489,6 +491,8 @@

Method Details

}, "maxConcurrentBackfillTasks": 42, # Max concurrent backfill tasks. "maxConcurrentCdcTasks": 42, # Max concurrent CDC tasks. + "transactionLogs": { # Configuration to use Transaction Logs CDC read method. # CDC reader reads from transaction logs. + }, }, }, "state": "A String", # The state of the stream. @@ -899,6 +903,8 @@

Method Details

}, "sourceConnectionProfile": "A String", # Required. Source connection profile resoource. Format: `projects/{project}/locations/{location}/connectionProfiles/{name}` "sqlServerSourceConfig": { # SQLServer data source configuration # SQLServer data source configuration. + "changeTables": { # Configuration to use Change Tables CDC read method. # CDC reader reads from change tables. + }, "excludeObjects": { # SQLServer database structure. # SQLServer objects to exclude from the stream. "schemas": [ # SQLServer schemas in the database server. { # SQLServer schema. @@ -949,6 +955,8 @@

Method Details

}, "maxConcurrentBackfillTasks": 42, # Max concurrent backfill tasks. "maxConcurrentCdcTasks": 42, # Max concurrent CDC tasks. + "transactionLogs": { # Configuration to use Transaction Logs CDC read method. # CDC reader reads from transaction logs. + }, }, }, "state": "A String", # The state of the stream. @@ -1298,6 +1306,8 @@

Method Details

}, "sourceConnectionProfile": "A String", # Required. Source connection profile resoource. Format: `projects/{project}/locations/{location}/connectionProfiles/{name}` "sqlServerSourceConfig": { # SQLServer data source configuration # SQLServer data source configuration. + "changeTables": { # Configuration to use Change Tables CDC read method. # CDC reader reads from change tables. + }, "excludeObjects": { # SQLServer database structure. # SQLServer objects to exclude from the stream. "schemas": [ # SQLServer schemas in the database server. { # SQLServer schema. @@ -1348,6 +1358,8 @@

Method Details

}, "maxConcurrentBackfillTasks": 42, # Max concurrent backfill tasks. "maxConcurrentCdcTasks": 42, # Max concurrent CDC tasks. + "transactionLogs": { # Configuration to use Transaction Logs CDC read method. # CDC reader reads from transaction logs. + }, }, }, "state": "A String", # The state of the stream. @@ -1704,6 +1716,8 @@

Method Details

}, "sourceConnectionProfile": "A String", # Required. Source connection profile resoource. Format: `projects/{project}/locations/{location}/connectionProfiles/{name}` "sqlServerSourceConfig": { # SQLServer data source configuration # SQLServer data source configuration. + "changeTables": { # Configuration to use Change Tables CDC read method. # CDC reader reads from change tables. + }, "excludeObjects": { # SQLServer database structure. # SQLServer objects to exclude from the stream. "schemas": [ # SQLServer schemas in the database server. { # SQLServer schema. @@ -1754,6 +1768,8 @@

Method Details

}, "maxConcurrentBackfillTasks": 42, # Max concurrent backfill tasks. "maxConcurrentCdcTasks": 42, # Max concurrent CDC tasks. + "transactionLogs": { # Configuration to use Transaction Logs CDC read method. # CDC reader reads from transaction logs. + }, }, }, "state": "A String", # The state of the stream. diff --git a/docs/dyn/dialogflow_v2.projects.conversationProfiles.html b/docs/dyn/dialogflow_v2.projects.conversationProfiles.html index f072a9b9667..c23e7ee4e92 100644 --- a/docs/dyn/dialogflow_v2.projects.conversationProfiles.html +++ b/docs/dyn/dialogflow_v2.projects.conversationProfiles.html @@ -219,6 +219,9 @@

Method Details

}, }, ], + "generators": [ # Optional. List of various generator resource names used in the conversation profile. + "A String", + ], "groupSuggestionResponses": True or False, # If `group_suggestion_responses` is false, and there are multiple `feature_configs` in `event based suggestion` or StreamingAnalyzeContent, we will try to deliver suggestions to customers as soon as we get new suggestion. Different type of suggestions based on the same context will be in separate Pub/Sub event or `StreamingAnalyzeContentResponse`. If `group_suggestion_responses` set to true. All the suggestions to the same participant based on the same context will be grouped into a single Pub/Sub event or StreamingAnalyzeContentResponse. }, "humanAgentSuggestionConfig": { # Detail human agent assistant config. # Configuration for agent assistance of human agent participant. @@ -273,6 +276,9 @@

Method Details

}, }, ], + "generators": [ # Optional. List of various generator resource names used in the conversation profile. + "A String", + ], "groupSuggestionResponses": True or False, # If `group_suggestion_responses` is false, and there are multiple `feature_configs` in `event based suggestion` or StreamingAnalyzeContent, we will try to deliver suggestions to customers as soon as we get new suggestion. Different type of suggestions based on the same context will be in separate Pub/Sub event or `StreamingAnalyzeContentResponse`. If `group_suggestion_responses` set to true. All the suggestions to the same participant based on the same context will be grouped into a single Pub/Sub event or StreamingAnalyzeContentResponse. }, "messageAnalysisConfig": { # Configuration for analyses to run on each conversation message. # Configuration for message analysis. @@ -285,7 +291,7 @@

Method Details

}, }, "humanAgentHandoffConfig": { # Defines the hand off to a live agent, typically on which external agent service provider to connect to a conversation. Currently, this feature is not general available, please contact Google to get access. # Configuration for connecting to a live agent. Currently, this feature is not general available, please contact Google to get access. - "livePersonConfig": { # Configuration specific to LivePerson (https://www.liveperson.com). # Uses LivePerson (https://www.liveperson.com). + "livePersonConfig": { # Configuration specific to [LivePerson](https://www.liveperson.com). # Uses [LivePerson](https://www.liveperson.com). "accountNumber": "A String", # Required. Account number of the LivePerson account to connect. This is the account number you input at the login page. }, "salesforceLiveAgentConfig": { # Configuration specific to Salesforce Live Agent. # Uses Salesforce Live Agent. @@ -398,6 +404,9 @@

Method Details

}, }, ], + "generators": [ # Optional. List of various generator resource names used in the conversation profile. + "A String", + ], "groupSuggestionResponses": True or False, # If `group_suggestion_responses` is false, and there are multiple `feature_configs` in `event based suggestion` or StreamingAnalyzeContent, we will try to deliver suggestions to customers as soon as we get new suggestion. Different type of suggestions based on the same context will be in separate Pub/Sub event or `StreamingAnalyzeContentResponse`. If `group_suggestion_responses` set to true. All the suggestions to the same participant based on the same context will be grouped into a single Pub/Sub event or StreamingAnalyzeContentResponse. }, "humanAgentSuggestionConfig": { # Detail human agent assistant config. # Configuration for agent assistance of human agent participant. @@ -452,6 +461,9 @@

Method Details

}, }, ], + "generators": [ # Optional. List of various generator resource names used in the conversation profile. + "A String", + ], "groupSuggestionResponses": True or False, # If `group_suggestion_responses` is false, and there are multiple `feature_configs` in `event based suggestion` or StreamingAnalyzeContent, we will try to deliver suggestions to customers as soon as we get new suggestion. Different type of suggestions based on the same context will be in separate Pub/Sub event or `StreamingAnalyzeContentResponse`. If `group_suggestion_responses` set to true. All the suggestions to the same participant based on the same context will be grouped into a single Pub/Sub event or StreamingAnalyzeContentResponse. }, "messageAnalysisConfig": { # Configuration for analyses to run on each conversation message. # Configuration for message analysis. @@ -464,7 +476,7 @@

Method Details

}, }, "humanAgentHandoffConfig": { # Defines the hand off to a live agent, typically on which external agent service provider to connect to a conversation. Currently, this feature is not general available, please contact Google to get access. # Configuration for connecting to a live agent. Currently, this feature is not general available, please contact Google to get access. - "livePersonConfig": { # Configuration specific to LivePerson (https://www.liveperson.com). # Uses LivePerson (https://www.liveperson.com). + "livePersonConfig": { # Configuration specific to [LivePerson](https://www.liveperson.com). # Uses [LivePerson](https://www.liveperson.com). "accountNumber": "A String", # Required. Account number of the LivePerson account to connect. This is the account number you input at the login page. }, "salesforceLiveAgentConfig": { # Configuration specific to Salesforce Live Agent. # Uses Salesforce Live Agent. @@ -602,6 +614,9 @@

Method Details

}, }, ], + "generators": [ # Optional. List of various generator resource names used in the conversation profile. + "A String", + ], "groupSuggestionResponses": True or False, # If `group_suggestion_responses` is false, and there are multiple `feature_configs` in `event based suggestion` or StreamingAnalyzeContent, we will try to deliver suggestions to customers as soon as we get new suggestion. Different type of suggestions based on the same context will be in separate Pub/Sub event or `StreamingAnalyzeContentResponse`. If `group_suggestion_responses` set to true. All the suggestions to the same participant based on the same context will be grouped into a single Pub/Sub event or StreamingAnalyzeContentResponse. }, "humanAgentSuggestionConfig": { # Detail human agent assistant config. # Configuration for agent assistance of human agent participant. @@ -656,6 +671,9 @@

Method Details

}, }, ], + "generators": [ # Optional. List of various generator resource names used in the conversation profile. + "A String", + ], "groupSuggestionResponses": True or False, # If `group_suggestion_responses` is false, and there are multiple `feature_configs` in `event based suggestion` or StreamingAnalyzeContent, we will try to deliver suggestions to customers as soon as we get new suggestion. Different type of suggestions based on the same context will be in separate Pub/Sub event or `StreamingAnalyzeContentResponse`. If `group_suggestion_responses` set to true. All the suggestions to the same participant based on the same context will be grouped into a single Pub/Sub event or StreamingAnalyzeContentResponse. }, "messageAnalysisConfig": { # Configuration for analyses to run on each conversation message. # Configuration for message analysis. @@ -668,7 +686,7 @@

Method Details

}, }, "humanAgentHandoffConfig": { # Defines the hand off to a live agent, typically on which external agent service provider to connect to a conversation. Currently, this feature is not general available, please contact Google to get access. # Configuration for connecting to a live agent. Currently, this feature is not general available, please contact Google to get access. - "livePersonConfig": { # Configuration specific to LivePerson (https://www.liveperson.com). # Uses LivePerson (https://www.liveperson.com). + "livePersonConfig": { # Configuration specific to [LivePerson](https://www.liveperson.com). # Uses [LivePerson](https://www.liveperson.com). "accountNumber": "A String", # Required. Account number of the LivePerson account to connect. This is the account number you input at the login page. }, "salesforceLiveAgentConfig": { # Configuration specific to Salesforce Live Agent. # Uses Salesforce Live Agent. @@ -792,6 +810,9 @@

Method Details

}, }, ], + "generators": [ # Optional. List of various generator resource names used in the conversation profile. + "A String", + ], "groupSuggestionResponses": True or False, # If `group_suggestion_responses` is false, and there are multiple `feature_configs` in `event based suggestion` or StreamingAnalyzeContent, we will try to deliver suggestions to customers as soon as we get new suggestion. Different type of suggestions based on the same context will be in separate Pub/Sub event or `StreamingAnalyzeContentResponse`. If `group_suggestion_responses` set to true. All the suggestions to the same participant based on the same context will be grouped into a single Pub/Sub event or StreamingAnalyzeContentResponse. }, "humanAgentSuggestionConfig": { # Detail human agent assistant config. # Configuration for agent assistance of human agent participant. @@ -846,6 +867,9 @@

Method Details

}, }, ], + "generators": [ # Optional. List of various generator resource names used in the conversation profile. + "A String", + ], "groupSuggestionResponses": True or False, # If `group_suggestion_responses` is false, and there are multiple `feature_configs` in `event based suggestion` or StreamingAnalyzeContent, we will try to deliver suggestions to customers as soon as we get new suggestion. Different type of suggestions based on the same context will be in separate Pub/Sub event or `StreamingAnalyzeContentResponse`. If `group_suggestion_responses` set to true. All the suggestions to the same participant based on the same context will be grouped into a single Pub/Sub event or StreamingAnalyzeContentResponse. }, "messageAnalysisConfig": { # Configuration for analyses to run on each conversation message. # Configuration for message analysis. @@ -858,7 +882,7 @@

Method Details

}, }, "humanAgentHandoffConfig": { # Defines the hand off to a live agent, typically on which external agent service provider to connect to a conversation. Currently, this feature is not general available, please contact Google to get access. # Configuration for connecting to a live agent. Currently, this feature is not general available, please contact Google to get access. - "livePersonConfig": { # Configuration specific to LivePerson (https://www.liveperson.com). # Uses LivePerson (https://www.liveperson.com). + "livePersonConfig": { # Configuration specific to [LivePerson](https://www.liveperson.com). # Uses [LivePerson](https://www.liveperson.com). "accountNumber": "A String", # Required. Account number of the LivePerson account to connect. This is the account number you input at the login page. }, "salesforceLiveAgentConfig": { # Configuration specific to Salesforce Live Agent. # Uses Salesforce Live Agent. @@ -990,6 +1014,9 @@

Method Details

}, }, ], + "generators": [ # Optional. List of various generator resource names used in the conversation profile. + "A String", + ], "groupSuggestionResponses": True or False, # If `group_suggestion_responses` is false, and there are multiple `feature_configs` in `event based suggestion` or StreamingAnalyzeContent, we will try to deliver suggestions to customers as soon as we get new suggestion. Different type of suggestions based on the same context will be in separate Pub/Sub event or `StreamingAnalyzeContentResponse`. If `group_suggestion_responses` set to true. All the suggestions to the same participant based on the same context will be grouped into a single Pub/Sub event or StreamingAnalyzeContentResponse. }, "humanAgentSuggestionConfig": { # Detail human agent assistant config. # Configuration for agent assistance of human agent participant. @@ -1044,6 +1071,9 @@

Method Details

}, }, ], + "generators": [ # Optional. List of various generator resource names used in the conversation profile. + "A String", + ], "groupSuggestionResponses": True or False, # If `group_suggestion_responses` is false, and there are multiple `feature_configs` in `event based suggestion` or StreamingAnalyzeContent, we will try to deliver suggestions to customers as soon as we get new suggestion. Different type of suggestions based on the same context will be in separate Pub/Sub event or `StreamingAnalyzeContentResponse`. If `group_suggestion_responses` set to true. All the suggestions to the same participant based on the same context will be grouped into a single Pub/Sub event or StreamingAnalyzeContentResponse. }, "messageAnalysisConfig": { # Configuration for analyses to run on each conversation message. # Configuration for message analysis. @@ -1056,7 +1086,7 @@

Method Details

}, }, "humanAgentHandoffConfig": { # Defines the hand off to a live agent, typically on which external agent service provider to connect to a conversation. Currently, this feature is not general available, please contact Google to get access. # Configuration for connecting to a live agent. Currently, this feature is not general available, please contact Google to get access. - "livePersonConfig": { # Configuration specific to LivePerson (https://www.liveperson.com). # Uses LivePerson (https://www.liveperson.com). + "livePersonConfig": { # Configuration specific to [LivePerson](https://www.liveperson.com). # Uses [LivePerson](https://www.liveperson.com). "accountNumber": "A String", # Required. Account number of the LivePerson account to connect. This is the account number you input at the login page. }, "salesforceLiveAgentConfig": { # Configuration specific to Salesforce Live Agent. # Uses Salesforce Live Agent. @@ -1170,6 +1200,9 @@

Method Details

}, }, ], + "generators": [ # Optional. List of various generator resource names used in the conversation profile. + "A String", + ], "groupSuggestionResponses": True or False, # If `group_suggestion_responses` is false, and there are multiple `feature_configs` in `event based suggestion` or StreamingAnalyzeContent, we will try to deliver suggestions to customers as soon as we get new suggestion. Different type of suggestions based on the same context will be in separate Pub/Sub event or `StreamingAnalyzeContentResponse`. If `group_suggestion_responses` set to true. All the suggestions to the same participant based on the same context will be grouped into a single Pub/Sub event or StreamingAnalyzeContentResponse. }, "humanAgentSuggestionConfig": { # Detail human agent assistant config. # Configuration for agent assistance of human agent participant. @@ -1224,6 +1257,9 @@

Method Details

}, }, ], + "generators": [ # Optional. List of various generator resource names used in the conversation profile. + "A String", + ], "groupSuggestionResponses": True or False, # If `group_suggestion_responses` is false, and there are multiple `feature_configs` in `event based suggestion` or StreamingAnalyzeContent, we will try to deliver suggestions to customers as soon as we get new suggestion. Different type of suggestions based on the same context will be in separate Pub/Sub event or `StreamingAnalyzeContentResponse`. If `group_suggestion_responses` set to true. All the suggestions to the same participant based on the same context will be grouped into a single Pub/Sub event or StreamingAnalyzeContentResponse. }, "messageAnalysisConfig": { # Configuration for analyses to run on each conversation message. # Configuration for message analysis. @@ -1236,7 +1272,7 @@
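The new generators field takes fully qualified generator resource names. As a sketch, an existing profile could be patched to reference one; the profile and generator names are placeholders, and the update mask assumes the field sits under the suggestion config exactly as the hunks above show:

```python
from googleapiclient.discovery import build

dialogflow = build("dialogflow", "v2")

profile = "projects/my-project/conversationProfiles/my-profile"  # placeholder

body = {
    "humanAgentAssistantConfig": {
        "humanAgentSuggestionConfig": {
            "generators": [
                # Placeholder generator created via projects.generators.create.
                "projects/my-project/locations/global/generators/my-summarizer",
            ],
        },
    },
}

updated = (
    dialogflow.projects().conversationProfiles()
    .patch(
        name=profile,
        updateMask="humanAgentAssistantConfig.humanAgentSuggestionConfig.generators",
        body=body,
    )
    .execute()
)
```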

Method Details

}, }, "humanAgentHandoffConfig": { # Defines the hand off to a live agent, typically on which external agent service provider to connect to a conversation. Currently, this feature is not general available, please contact Google to get access. # Configuration for connecting to a live agent. Currently, this feature is not general available, please contact Google to get access. - "livePersonConfig": { # Configuration specific to LivePerson (https://www.liveperson.com). # Uses LivePerson (https://www.liveperson.com). + "livePersonConfig": { # Configuration specific to [LivePerson](https://www.liveperson.com). # Uses [LivePerson](https://www.liveperson.com). "accountNumber": "A String", # Required. Account number of the LivePerson account to connect. This is the account number you input at the login page. }, "salesforceLiveAgentConfig": { # Configuration specific to Salesforce Live Agent. # Uses Salesforce Live Agent. diff --git a/docs/dyn/dialogflow_v2.projects.generators.html b/docs/dyn/dialogflow_v2.projects.generators.html new file mode 100644 index 00000000000..6b077fe3bab --- /dev/null +++ b/docs/dyn/dialogflow_v2.projects.generators.html @@ -0,0 +1,333 @@ + + + +

Dialogflow API . projects . generators

+

Instance Methods

+

+ close()

+

Close httplib2 connections.

+

+ create(parent, body=None, generatorId=None, x__xgafv=None)

+

Creates a generator.

+

+ list(parent, pageSize=None, pageToken=None, x__xgafv=None)

+

Lists generators.

+

+ list_next()

+

Retrieves the next page of results.

+

Method Details

+
+ close() +
Close httplib2 connections.
+
+ +
+ create(parent, body=None, generatorId=None, x__xgafv=None) +
Creates a generator.
+
+Args:
+  parent: string, Required. The project/location to create a generator for. Format: `projects//locations/` (required)
+  body: object, The request body.
+    The object takes the form of:
+
+{ # LLM generator.
+  "createTime": "A String", # Output only. Creation time of this generator.
+  "description": "A String", # Optional. Human readable description of the generator.
+  "inferenceParameter": { # The parameters of inference. # Optional. Inference parameters for this generator.
+    "maxOutputTokens": 42, # Optional. Maximum number of the output tokens for the generator.
+    "temperature": 3.14, # Optional. Controls the randomness of LLM predictions. Low temperature = less random. High temperature = more random. If unset (or 0), uses a default value of 0.
+    "topK": 42, # Optional. Top-k changes how the model selects tokens for output. A top-k of 1 means the selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-k of 3 means that the next token is selected from among the 3 most probable tokens (using temperature). For each token selection step, the top K tokens with the highest probabilities are sampled. Then tokens are further filtered based on topP with the final token selected using temperature sampling. Specify a lower value for less random responses and a higher value for more random responses. Acceptable value is [1, 40], default to 40.
+    "topP": 3.14, # Optional. Top-p changes how the model selects tokens for output. Tokens are selected from most K (see topK parameter) probable to least until the sum of their probabilities equals the top-p value. For example, if tokens A, B, and C have a probability of 0.3, 0.2, and 0.1 and the top-p value is 0.5, then the model will select either A or B as the next token (using temperature) and doesn't consider C. The default top-p value is 0.95. Specify a lower value for less random responses and a higher value for more random responses. Acceptable value is [0.0, 1.0], default to 0.95.
+  },
+  "name": "A String", # Output only. Identifier. The resource name of the generator. Format: `projects//locations//generators/`
+  "summarizationContext": { # Summarization context that customer can configure. # Input of prebuilt Summarization feature.
+    "fewShotExamples": [ # Optional. List of few shot examples.
+      { # Providing examples in the generator (i.e. building a few-shot generator) helps convey the desired format of the LLM response.
+        "conversationContext": { # Context of the conversation, including transcripts. # Optional. Conversation transcripts.
+          "messageEntries": [ # Optional. List of message transcripts in the conversation.
+            { # Represents a message entry of a conversation.
+              "createTime": "A String", # Optional. Create time of the message entry.
+              "languageCode": "A String", # Optional. The language of the text. See [Language Support](https://cloud.google.com/dialogflow/docs/reference/language) for a list of the currently supported language codes.
+              "role": "A String", # Optional. Participant role of the message.
+              "text": "A String", # Optional. Transcript content of the message.
+            },
+          ],
+        },
+        "extraInfo": { # Optional. Key is the placeholder field name in input, value is the value of the placeholder. E.g. instruction contains "@price", and ingested data has <"price", "10">
+          "a_key": "A String",
+        },
+        "output": { # Suggestion generated using a Generator. # Required. Example output of the model.
+          "summarySuggestion": { # Suggested summary of the conversation. # Optional. Suggested summary.
+            "summarySections": [ # Required. All the parts of generated summary.
+              { # A component of the generated summary.
+                "section": "A String", # Required. Name of the section.
+                "summary": "A String", # Required. Summary text for the section.
+              },
+            ],
+          },
+        },
+        "summarizationSectionList": { # List of summarization sections. # Summarization sections.
+          "summarizationSections": [ # Optional. Summarization sections.
+            { # Represents the section of summarization.
+              "definition": "A String", # Optional. Definition of the section, for example, "what the customer needs help with or has question about."
+              "key": "A String", # Optional. Name of the section, for example, "situation".
+              "type": "A String", # Optional. Type of the summarization section.
+            },
+          ],
+        },
+      },
+    ],
+    "outputLanguageCode": "A String", # Optional. The target language of the generated summary. The language code for conversation will be used if this field is empty. Supported 2.0 and later versions.
+    "summarizationSections": [ # Optional. List of sections. Note it contains both predefined section sand customer defined sections.
+      { # Represents the section of summarization.
+        "definition": "A String", # Optional. Definition of the section, for example, "what the customer needs help with or has question about."
+        "key": "A String", # Optional. Name of the section, for example, "situation".
+        "type": "A String", # Optional. Type of the summarization section.
+      },
+    ],
+    "version": "A String", # Optional. Version of the feature. If not set, default to latest version. Current candidates are ["1.0"].
+  },
+  "triggerEvent": "A String", # Optional. The trigger event of the generator. It defines when the generator is triggered in a conversation.
+  "updateTime": "A String", # Output only. Update time of this generator.
+}
+
+  generatorId: string, Optional. The ID to use for the generator, which will become the final component of the generator's resource name. The generator ID must be compliant with the regular expression `a-zA-Z*`, with a length in the range [3,64]. If the field is not provided, an ID will be auto-generated. If the field is provided, the caller is responsible for 1. the uniqueness of the ID, otherwise the request will be rejected; 2. consistency in whether or not to use a custom ID under a project, to better ensure uniqueness.
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # LLM generator.
+  "createTime": "A String", # Output only. Creation time of this generator.
+  "description": "A String", # Optional. Human readable description of the generator.
+  "inferenceParameter": { # The parameters of inference. # Optional. Inference parameters for this generator.
+    "maxOutputTokens": 42, # Optional. Maximum number of the output tokens for the generator.
+    "temperature": 3.14, # Optional. Controls the randomness of LLM predictions. Low temperature = less random. High temperature = more random. If unset (or 0), uses a default value of 0.
+    "topK": 42, # Optional. Top-k changes how the model selects tokens for output. A top-k of 1 means the selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-k of 3 means that the next token is selected from among the 3 most probable tokens (using temperature). For each token selection step, the top K tokens with the highest probabilities are sampled. Then tokens are further filtered based on topP with the final token selected using temperature sampling. Specify a lower value for less random responses and a higher value for more random responses. Acceptable value is [1, 40], default to 40.
+    "topP": 3.14, # Optional. Top-p changes how the model selects tokens for output. Tokens are selected from most K (see topK parameter) probable to least until the sum of their probabilities equals the top-p value. For example, if tokens A, B, and C have a probability of 0.3, 0.2, and 0.1 and the top-p value is 0.5, then the model will select either A or B as the next token (using temperature) and doesn't consider C. The default top-p value is 0.95. Specify a lower value for less random responses and a higher value for more random responses. Acceptable value is [0.0, 1.0], default to 0.95.
+  },
+  "name": "A String", # Output only. Identifier. The resource name of the generator. Format: `projects//locations//generators/`
+  "summarizationContext": { # Summarization context that customer can configure. # Input of prebuilt Summarization feature.
+    "fewShotExamples": [ # Optional. List of few shot examples.
+      { # Providing examples in the generator (i.e. building a few-shot generator) helps convey the desired format of the LLM response.
+        "conversationContext": { # Context of the conversation, including transcripts. # Optional. Conversation transcripts.
+          "messageEntries": [ # Optional. List of message transcripts in the conversation.
+            { # Represents a message entry of a conversation.
+              "createTime": "A String", # Optional. Create time of the message entry.
+              "languageCode": "A String", # Optional. The language of the text. See [Language Support](https://cloud.google.com/dialogflow/docs/reference/language) for a list of the currently supported language codes.
+              "role": "A String", # Optional. Participant role of the message.
+              "text": "A String", # Optional. Transcript content of the message.
+            },
+          ],
+        },
+        "extraInfo": { # Optional. Key is the placeholder field name in input, value is the value of the placeholder. E.g. instruction contains "@price", and ingested data has <"price", "10">
+          "a_key": "A String",
+        },
+        "output": { # Suggestion generated using a Generator. # Required. Example output of the model.
+          "summarySuggestion": { # Suggested summary of the conversation. # Optional. Suggested summary.
+            "summarySections": [ # Required. All the parts of generated summary.
+              { # A component of the generated summary.
+                "section": "A String", # Required. Name of the section.
+                "summary": "A String", # Required. Summary text for the section.
+              },
+            ],
+          },
+        },
+        "summarizationSectionList": { # List of summarization sections. # Summarization sections.
+          "summarizationSections": [ # Optional. Summarization sections.
+            { # Represents the section of summarization.
+              "definition": "A String", # Optional. Definition of the section, for example, "what the customer needs help with or has question about."
+              "key": "A String", # Optional. Name of the section, for example, "situation".
+              "type": "A String", # Optional. Type of the summarization section.
+            },
+          ],
+        },
+      },
+    ],
+    "outputLanguageCode": "A String", # Optional. The target language of the generated summary. The language code for conversation will be used if this field is empty. Supported 2.0 and later versions.
+    "summarizationSections": [ # Optional. List of sections. Note it contains both predefined section sand customer defined sections.
+      { # Represents the section of summarization.
+        "definition": "A String", # Optional. Definition of the section, for example, "what the customer needs help with or has question about."
+        "key": "A String", # Optional. Name of the section, for example, "situation".
+        "type": "A String", # Optional. Type of the summarization section.
+      },
+    ],
+    "version": "A String", # Optional. Version of the feature. If not set, default to latest version. Current candidates are ["1.0"].
+  },
+  "triggerEvent": "A String", # Optional. The trigger event of the generator. It defines when the generator is triggered in a conversation.
+  "updateTime": "A String", # Output only. Update time of this generator.
+}
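Put together, a create() call might look like the sketch below. The parent, generator ID, trigger event, and section type are illustrative values (the enum strings are assumptions, so check the API reference before relying on them):

```python
from googleapiclient.discovery import build

dialogflow = build("dialogflow", "v2")

parent = "projects/my-project/locations/global"  # placeholder

body = {
    "description": "Summarizes support conversations",
    "triggerEvent": "MANUAL_CALL",  # assumed TriggerEvent enum value
    "inferenceParameter": {
        "temperature": 0.2,      # low temperature: less random output
        "topK": 40,
        "topP": 0.95,
        "maxOutputTokens": 256,
    },
    "summarizationContext": {
        "outputLanguageCode": "en-US",
        "summarizationSections": [
            {
                "key": "situation",
                "definition": "What the customer needs help with.",
                "type": "SITUATION",  # assumed section type enum value
            },
        ],
        "version": "1.0",
    },
}

generator = (
    dialogflow.projects().generators()
    .create(parent=parent, generatorId="support-summarizer", body=body)
    .execute()
)
print(generator["name"])  # server-assigned resource name
```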
+
+ +
+ list(parent, pageSize=None, pageToken=None, x__xgafv=None) +
Lists generators.
+
+Args:
+  parent: string, Required. The project/location to list generators for. Format: `projects//locations/` (required)
+  pageSize: integer, Optional. Maximum number of generators to return in a single page. Defaults to 10.
+  pageToken: string, Optional. The next_page_token value returned from a previous list request.
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Response of ListGenerators.
+  "generators": [ # List of generators retrieved.
+    { # LLM generator.
+      "createTime": "A String", # Output only. Creation time of this generator.
+      "description": "A String", # Optional. Human readable description of the generator.
+      "inferenceParameter": { # The parameters of inference. # Optional. Inference parameters for this generator.
+        "maxOutputTokens": 42, # Optional. Maximum number of the output tokens for the generator.
+        "temperature": 3.14, # Optional. Controls the randomness of LLM predictions. Low temperature = less random. High temperature = more random. If unset (or 0), uses a default value of 0.
+        "topK": 42, # Optional. Top-k changes how the model selects tokens for output. A top-k of 1 means the selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-k of 3 means that the next token is selected from among the 3 most probable tokens (using temperature). For each token selection step, the top K tokens with the highest probabilities are sampled. Then tokens are further filtered based on topP with the final token selected using temperature sampling. Specify a lower value for less random responses and a higher value for more random responses. Acceptable value is [1, 40], default to 40.
+        "topP": 3.14, # Optional. Top-p changes how the model selects tokens for output. Tokens are selected from most K (see topK parameter) probable to least until the sum of their probabilities equals the top-p value. For example, if tokens A, B, and C have a probability of 0.3, 0.2, and 0.1 and the top-p value is 0.5, then the model will select either A or B as the next token (using temperature) and doesn't consider C. The default top-p value is 0.95. Specify a lower value for less random responses and a higher value for more random responses. Acceptable value is [0.0, 1.0], default to 0.95.
+      },
+      "name": "A String", # Output only. Identifier. The resource name of the generator. Format: `projects//locations//generators/`
+      "summarizationContext": { # Summarization context that customer can configure. # Input of prebuilt Summarization feature.
+        "fewShotExamples": [ # Optional. List of few shot examples.
+          { # Providing examples in the generator (i.e. building a few-shot generator) helps convey the desired format of the LLM response.
+            "conversationContext": { # Context of the conversation, including transcripts. # Optional. Conversation transcripts.
+              "messageEntries": [ # Optional. List of message transcripts in the conversation.
+                { # Represents a message entry of a conversation.
+                  "createTime": "A String", # Optional. Create time of the message entry.
+                  "languageCode": "A String", # Optional. The language of the text. See [Language Support](https://cloud.google.com/dialogflow/docs/reference/language) for a list of the currently supported language codes.
+                  "role": "A String", # Optional. Participant role of the message.
+                  "text": "A String", # Optional. Transcript content of the message.
+                },
+              ],
+            },
+            "extraInfo": { # Optional. Key is the placeholder field name in input, value is the value of the placeholder. E.g. instruction contains "@price", and ingested data has <"price", "10">
+              "a_key": "A String",
+            },
+            "output": { # Suggestion generated using a Generator. # Required. Example output of the model.
+              "summarySuggestion": { # Suggested summary of the conversation. # Optional. Suggested summary.
+                "summarySections": [ # Required. All the parts of generated summary.
+                  { # A component of the generated summary.
+                    "section": "A String", # Required. Name of the section.
+                    "summary": "A String", # Required. Summary text for the section.
+                  },
+                ],
+              },
+            },
+            "summarizationSectionList": { # List of summarization sections. # Summarization sections.
+              "summarizationSections": [ # Optional. Summarization sections.
+                { # Represents the section of summarization.
+                  "definition": "A String", # Optional. Definition of the section, for example, "what the customer needs help with or has question about."
+                  "key": "A String", # Optional. Name of the section, for example, "situation".
+                  "type": "A String", # Optional. Type of the summarization section.
+                },
+              ],
+            },
+          },
+        ],
+        "outputLanguageCode": "A String", # Optional. The target language of the generated summary. The language code for conversation will be used if this field is empty. Supported 2.0 and later versions.
+        "summarizationSections": [ # Optional. List of sections. Note it contains both predefined section sand customer defined sections.
+          { # Represents the section of summarization.
+            "definition": "A String", # Optional. Definition of the section, for example, "what the customer needs help with or has question about."
+            "key": "A String", # Optional. Name of the section, for example, "situation".
+            "type": "A String", # Optional. Type of the summarization section.
+          },
+        ],
+        "version": "A String", # Optional. Version of the feature. If not set, default to latest version. Current candidates are ["1.0"].
+      },
+      "triggerEvent": "A String", # Optional. The trigger event of the generator. It defines when the generator is triggered in a conversation.
+      "updateTime": "A String", # Output only. Update time of this generator.
+    },
+  ],
+  "nextPageToken": "A String", # Token to retrieve the next page of results, or empty if there are no more results in the list.
+}
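Paging through this response uses the standard list()/list_next() pair (list_next() is documented next); a short sketch with a placeholder parent:

```python
from googleapiclient.discovery import build

dialogflow = build("dialogflow", "v2")
generators_api = dialogflow.projects().generators()

request = generators_api.list(
    parent="projects/my-project/locations/global",  # placeholder
    pageSize=10,
)
while request is not None:
    response = request.execute()
    for generator in response.get("generators", []):
        print(generator["name"])
    # Returns None once nextPageToken is exhausted.
    request = generators_api.list_next(
        previous_request=request, previous_response=response
    )
```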
+
+ +
+ list_next() +
Retrieves the next page of results.
+
+        Args:
+          previous_request: The request for the previous page. (required)
+          previous_response: The response from the request for the previous page. (required)
+
+        Returns:
+          A request object that you can call 'execute()' on to request the next
+          page. Returns None if there are no more items in the collection.
+        
+
+ + \ No newline at end of file diff --git a/docs/dyn/dialogflow_v2.projects.html b/docs/dyn/dialogflow_v2.projects.html index 6cb082c1778..1d689d4ea09 100644 --- a/docs/dyn/dialogflow_v2.projects.html +++ b/docs/dyn/dialogflow_v2.projects.html @@ -104,6 +104,11 @@

Instance Methods

Returns the conversations Resource.

+

+ generators() +

+

Returns the generators Resource.

+

knowledgeBases()

diff --git a/docs/dyn/dialogflow_v2.projects.locations.conversationProfiles.html b/docs/dyn/dialogflow_v2.projects.locations.conversationProfiles.html index c311eda691d..16ef84b53ad 100644 --- a/docs/dyn/dialogflow_v2.projects.locations.conversationProfiles.html +++ b/docs/dyn/dialogflow_v2.projects.locations.conversationProfiles.html @@ -219,6 +219,9 @@

Method Details

}, }, ], + "generators": [ # Optional. List of various generator resource names used in the conversation profile. + "A String", + ], "groupSuggestionResponses": True or False, # If `group_suggestion_responses` is false, and there are multiple `feature_configs` in `event based suggestion` or StreamingAnalyzeContent, we will try to deliver suggestions to customers as soon as we get new suggestion. Different type of suggestions based on the same context will be in separate Pub/Sub event or `StreamingAnalyzeContentResponse`. If `group_suggestion_responses` set to true. All the suggestions to the same participant based on the same context will be grouped into a single Pub/Sub event or StreamingAnalyzeContentResponse. }, "humanAgentSuggestionConfig": { # Detail human agent assistant config. # Configuration for agent assistance of human agent participant. @@ -273,6 +276,9 @@

Method Details

}, }, ], + "generators": [ # Optional. List of various generator resource names used in the conversation profile. + "A String", + ], "groupSuggestionResponses": True or False, # If `group_suggestion_responses` is false, and there are multiple `feature_configs` in `event based suggestion` or StreamingAnalyzeContent, we will try to deliver suggestions to customers as soon as we get new suggestion. Different type of suggestions based on the same context will be in separate Pub/Sub event or `StreamingAnalyzeContentResponse`. If `group_suggestion_responses` set to true. All the suggestions to the same participant based on the same context will be grouped into a single Pub/Sub event or StreamingAnalyzeContentResponse. }, "messageAnalysisConfig": { # Configuration for analyses to run on each conversation message. # Configuration for message analysis. @@ -285,7 +291,7 @@

Method Details

}, }, "humanAgentHandoffConfig": { # Defines the hand off to a live agent, typically on which external agent service provider to connect to a conversation. Currently, this feature is not general available, please contact Google to get access. # Configuration for connecting to a live agent. Currently, this feature is not general available, please contact Google to get access. - "livePersonConfig": { # Configuration specific to LivePerson (https://www.liveperson.com). # Uses LivePerson (https://www.liveperson.com). + "livePersonConfig": { # Configuration specific to [LivePerson](https://www.liveperson.com). # Uses [LivePerson](https://www.liveperson.com). "accountNumber": "A String", # Required. Account number of the LivePerson account to connect. This is the account number you input at the login page. }, "salesforceLiveAgentConfig": { # Configuration specific to Salesforce Live Agent. # Uses Salesforce Live Agent. @@ -398,6 +404,9 @@

Method Details

}, }, ], + "generators": [ # Optional. List of various generator resource names used in the conversation profile. + "A String", + ], "groupSuggestionResponses": True or False, # If `group_suggestion_responses` is false, and there are multiple `feature_configs` in `event based suggestion` or StreamingAnalyzeContent, we will try to deliver suggestions to customers as soon as we get new suggestion. Different type of suggestions based on the same context will be in separate Pub/Sub event or `StreamingAnalyzeContentResponse`. If `group_suggestion_responses` set to true. All the suggestions to the same participant based on the same context will be grouped into a single Pub/Sub event or StreamingAnalyzeContentResponse. }, "humanAgentSuggestionConfig": { # Detail human agent assistant config. # Configuration for agent assistance of human agent participant. @@ -452,6 +461,9 @@

Method Details

}, }, ], + "generators": [ # Optional. List of various generator resource names used in the conversation profile. + "A String", + ], "groupSuggestionResponses": True or False, # If `group_suggestion_responses` is false and there are multiple `feature_configs` in `event based suggestion` or StreamingAnalyzeContent, we will try to deliver suggestions to customers as soon as we get a new suggestion. Different types of suggestions based on the same context will be delivered in separate Pub/Sub events or `StreamingAnalyzeContentResponse` messages. If `group_suggestion_responses` is set to true, all the suggestions to the same participant based on the same context will be grouped into a single Pub/Sub event or StreamingAnalyzeContentResponse. }, "messageAnalysisConfig": { # Configuration for analyses to run on each conversation message. # Configuration for message analysis. @@ -464,7 +476,7 @@

Method Details

}, }, "humanAgentHandoffConfig": { # Defines the hand off to a live agent, typically on which external agent service provider to connect to a conversation. Currently, this feature is not general available, please contact Google to get access. # Configuration for connecting to a live agent. Currently, this feature is not general available, please contact Google to get access. - "livePersonConfig": { # Configuration specific to LivePerson (https://www.liveperson.com). # Uses LivePerson (https://www.liveperson.com). + "livePersonConfig": { # Configuration specific to [LivePerson](https://www.liveperson.com). # Uses [LivePerson](https://www.liveperson.com). "accountNumber": "A String", # Required. Account number of the LivePerson account to connect. This is the account number you input at the login page. }, "salesforceLiveAgentConfig": { # Configuration specific to Salesforce Live Agent. # Uses Salesforce Live Agent. @@ -602,6 +614,9 @@

Method Details

}, }, ], + "generators": [ # Optional. List of various generator resource names used in the conversation profile. + "A String", + ], "groupSuggestionResponses": True or False, # If `group_suggestion_responses` is false and there are multiple `feature_configs` in `event based suggestion` or StreamingAnalyzeContent, we will try to deliver suggestions to customers as soon as we get a new suggestion. Different types of suggestions based on the same context will be delivered in separate Pub/Sub events or `StreamingAnalyzeContentResponse` messages. If `group_suggestion_responses` is set to true, all the suggestions to the same participant based on the same context will be grouped into a single Pub/Sub event or StreamingAnalyzeContentResponse. }, "humanAgentSuggestionConfig": { # Detailed human agent assistant config. # Configuration for agent assistance of human agent participant. @@ -656,6 +671,9 @@

Method Details

}, }, ], + "generators": [ # Optional. List of various generator resource names used in the conversation profile. + "A String", + ], "groupSuggestionResponses": True or False, # If `group_suggestion_responses` is false and there are multiple `feature_configs` in `event based suggestion` or StreamingAnalyzeContent, we will try to deliver suggestions to customers as soon as we get a new suggestion. Different types of suggestions based on the same context will be delivered in separate Pub/Sub events or `StreamingAnalyzeContentResponse` messages. If `group_suggestion_responses` is set to true, all the suggestions to the same participant based on the same context will be grouped into a single Pub/Sub event or StreamingAnalyzeContentResponse. }, "messageAnalysisConfig": { # Configuration for analyses to run on each conversation message. # Configuration for message analysis. @@ -668,7 +686,7 @@

Method Details

}, }, "humanAgentHandoffConfig": { # Defines the hand off to a live agent, typically on which external agent service provider to connect to a conversation. Currently, this feature is not general available, please contact Google to get access. # Configuration for connecting to a live agent. Currently, this feature is not general available, please contact Google to get access. - "livePersonConfig": { # Configuration specific to LivePerson (https://www.liveperson.com). # Uses LivePerson (https://www.liveperson.com). + "livePersonConfig": { # Configuration specific to [LivePerson](https://www.liveperson.com). # Uses [LivePerson](https://www.liveperson.com). "accountNumber": "A String", # Required. Account number of the LivePerson account to connect. This is the account number you input at the login page. }, "salesforceLiveAgentConfig": { # Configuration specific to Salesforce Live Agent. # Uses Salesforce Live Agent. @@ -792,6 +810,9 @@

Method Details

}, }, ], + "generators": [ # Optional. List of various generator resource names used in the conversation profile. + "A String", + ], "groupSuggestionResponses": True or False, # If `group_suggestion_responses` is false and there are multiple `feature_configs` in `event based suggestion` or StreamingAnalyzeContent, we will try to deliver suggestions to customers as soon as we get a new suggestion. Different types of suggestions based on the same context will be delivered in separate Pub/Sub events or `StreamingAnalyzeContentResponse` messages. If `group_suggestion_responses` is set to true, all the suggestions to the same participant based on the same context will be grouped into a single Pub/Sub event or StreamingAnalyzeContentResponse. }, "humanAgentSuggestionConfig": { # Detailed human agent assistant config. # Configuration for agent assistance of human agent participant. @@ -846,6 +867,9 @@

Method Details

}, }, ], + "generators": [ # Optional. List of various generator resource names used in the conversation profile. + "A String", + ], "groupSuggestionResponses": True or False, # If `group_suggestion_responses` is false and there are multiple `feature_configs` in `event based suggestion` or StreamingAnalyzeContent, we will try to deliver suggestions to customers as soon as we get a new suggestion. Different types of suggestions based on the same context will be delivered in separate Pub/Sub events or `StreamingAnalyzeContentResponse` messages. If `group_suggestion_responses` is set to true, all the suggestions to the same participant based on the same context will be grouped into a single Pub/Sub event or StreamingAnalyzeContentResponse. }, "messageAnalysisConfig": { # Configuration for analyses to run on each conversation message. # Configuration for message analysis. @@ -858,7 +882,7 @@

Method Details

}, }, "humanAgentHandoffConfig": { # Defines the hand off to a live agent, typically on which external agent service provider to connect to a conversation. Currently, this feature is not general available, please contact Google to get access. # Configuration for connecting to a live agent. Currently, this feature is not general available, please contact Google to get access. - "livePersonConfig": { # Configuration specific to LivePerson (https://www.liveperson.com). # Uses LivePerson (https://www.liveperson.com). + "livePersonConfig": { # Configuration specific to [LivePerson](https://www.liveperson.com). # Uses [LivePerson](https://www.liveperson.com). "accountNumber": "A String", # Required. Account number of the LivePerson account to connect. This is the account number you input at the login page. }, "salesforceLiveAgentConfig": { # Configuration specific to Salesforce Live Agent. # Uses Salesforce Live Agent. @@ -990,6 +1014,9 @@

Method Details

}, }, ], + "generators": [ # Optional. List of various generator resource names used in the conversation profile. + "A String", + ], "groupSuggestionResponses": True or False, # If `group_suggestion_responses` is false and there are multiple `feature_configs` in `event based suggestion` or StreamingAnalyzeContent, we will try to deliver suggestions to customers as soon as we get a new suggestion. Different types of suggestions based on the same context will be delivered in separate Pub/Sub events or `StreamingAnalyzeContentResponse` messages. If `group_suggestion_responses` is set to true, all the suggestions to the same participant based on the same context will be grouped into a single Pub/Sub event or StreamingAnalyzeContentResponse. }, "humanAgentSuggestionConfig": { # Detailed human agent assistant config. # Configuration for agent assistance of human agent participant. @@ -1044,6 +1071,9 @@

Method Details

}, }, ], + "generators": [ # Optional. List of various generator resource names used in the conversation profile. + "A String", + ], "groupSuggestionResponses": True or False, # If `group_suggestion_responses` is false and there are multiple `feature_configs` in `event based suggestion` or StreamingAnalyzeContent, we will try to deliver suggestions to customers as soon as we get a new suggestion. Different types of suggestions based on the same context will be delivered in separate Pub/Sub events or `StreamingAnalyzeContentResponse` messages. If `group_suggestion_responses` is set to true, all the suggestions to the same participant based on the same context will be grouped into a single Pub/Sub event or StreamingAnalyzeContentResponse. }, "messageAnalysisConfig": { # Configuration for analyses to run on each conversation message. # Configuration for message analysis. @@ -1056,7 +1086,7 @@

Method Details

}, }, "humanAgentHandoffConfig": { # Defines the hand off to a live agent, typically on which external agent service provider to connect to a conversation. Currently, this feature is not general available, please contact Google to get access. # Configuration for connecting to a live agent. Currently, this feature is not general available, please contact Google to get access. - "livePersonConfig": { # Configuration specific to LivePerson (https://www.liveperson.com). # Uses LivePerson (https://www.liveperson.com). + "livePersonConfig": { # Configuration specific to [LivePerson](https://www.liveperson.com). # Uses [LivePerson](https://www.liveperson.com). "accountNumber": "A String", # Required. Account number of the LivePerson account to connect. This is the account number you input at the login page. }, "salesforceLiveAgentConfig": { # Configuration specific to Salesforce Live Agent. # Uses Salesforce Live Agent. @@ -1170,6 +1200,9 @@

Method Details

}, }, ], + "generators": [ # Optional. List of various generator resource names used in the conversation profile. + "A String", + ], "groupSuggestionResponses": True or False, # If `group_suggestion_responses` is false and there are multiple `feature_configs` in `event based suggestion` or StreamingAnalyzeContent, we will try to deliver suggestions to customers as soon as we get a new suggestion. Different types of suggestions based on the same context will be delivered in separate Pub/Sub events or `StreamingAnalyzeContentResponse` messages. If `group_suggestion_responses` is set to true, all the suggestions to the same participant based on the same context will be grouped into a single Pub/Sub event or StreamingAnalyzeContentResponse. }, "humanAgentSuggestionConfig": { # Detailed human agent assistant config. # Configuration for agent assistance of human agent participant. @@ -1224,6 +1257,9 @@

Method Details

}, }, ], + "generators": [ # Optional. List of various generator resource names used in the conversation profile. + "A String", + ], "groupSuggestionResponses": True or False, # If `group_suggestion_responses` is false and there are multiple `feature_configs` in `event based suggestion` or StreamingAnalyzeContent, we will try to deliver suggestions to customers as soon as we get a new suggestion. Different types of suggestions based on the same context will be delivered in separate Pub/Sub events or `StreamingAnalyzeContentResponse` messages. If `group_suggestion_responses` is set to true, all the suggestions to the same participant based on the same context will be grouped into a single Pub/Sub event or StreamingAnalyzeContentResponse. }, "messageAnalysisConfig": { # Configuration for analyses to run on each conversation message. # Configuration for message analysis. @@ -1236,7 +1272,7 @@

Method Details

}, }, "humanAgentHandoffConfig": { # Defines the hand off to a live agent, typically on which external agent service provider to connect to a conversation. Currently, this feature is not general available, please contact Google to get access. # Configuration for connecting to a live agent. Currently, this feature is not general available, please contact Google to get access. - "livePersonConfig": { # Configuration specific to LivePerson (https://www.liveperson.com). # Uses LivePerson (https://www.liveperson.com). + "livePersonConfig": { # Configuration specific to [LivePerson](https://www.liveperson.com). # Uses [LivePerson](https://www.liveperson.com). "accountNumber": "A String", # Required. Account number of the LivePerson account to connect. This is the account number you input at the login page. }, "salesforceLiveAgentConfig": { # Configuration specific to Salesforce Live Agent. # Uses Salesforce Live Agent. diff --git a/docs/dyn/dialogflow_v2.projects.locations.generators.html b/docs/dyn/dialogflow_v2.projects.locations.generators.html new file mode 100644 index 00000000000..d38271e78e8 --- /dev/null +++ b/docs/dyn/dialogflow_v2.projects.locations.generators.html @@ -0,0 +1,577 @@ + + + +

Dialogflow API . projects . locations . generators

+

Instance Methods

+

+ close()

+

Close httplib2 connections.

+

+ create(parent, body=None, generatorId=None, x__xgafv=None)

+

Creates a generator.

+

+ delete(name, x__xgafv=None)

+

Deletes a generator.

+

+ get(name, x__xgafv=None)

+

Retrieves a generator.

+

+ list(parent, pageSize=None, pageToken=None, x__xgafv=None)

+

Lists generators.

+

+ list_next()

+

Retrieves the next page of results.

+

+ patch(name, body=None, updateMask=None, x__xgafv=None)

+

Updates a generator.

+

Method Details

+
+ close() +
Close httplib2 connections.
+
+ +
+ create(parent, body=None, generatorId=None, x__xgafv=None) +
Creates a generator.
+
+Args:
+  parent: string, Required. The project/location to create a generator for. Format: `projects//locations/` (required)
+  body: object, The request body.
+    The object takes the form of:
+
+{ # LLM generator.
+  "createTime": "A String", # Output only. Creation time of this generator.
+  "description": "A String", # Optional. Human readable description of the generator.
+  "inferenceParameter": { # The parameters of inference. # Optional. Inference parameters for this generator.
+    "maxOutputTokens": 42, # Optional. Maximum number of the output tokens for the generator.
+    "temperature": 3.14, # Optional. Controls the randomness of LLM predictions. Low temperature = less random. High temperature = more random. If unset (or 0), uses a default value of 0.
+    "topK": 42, # Optional. Top-k changes how the model selects tokens for output. A top-k of 1 means the selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-k of 3 means that the next token is selected from among the 3 most probable tokens (using temperature). For each token selection step, the top K tokens with the highest probabilities are sampled. Then tokens are further filtered based on topP with the final token selected using temperature sampling. Specify a lower value for less random responses and a higher value for more random responses. Acceptable value is [1, 40], default to 40.
+    "topP": 3.14, # Optional. Top-p changes how the model selects tokens for output. Tokens are selected from most K (see topK parameter) probable to least until the sum of their probabilities equals the top-p value. For example, if tokens A, B, and C have a probability of 0.3, 0.2, and 0.1 and the top-p value is 0.5, then the model will select either A or B as the next token (using temperature) and doesn't consider C. The default top-p value is 0.95. Specify a lower value for less random responses and a higher value for more random responses. Acceptable value is [0.0, 1.0], default to 0.95.
+  },
+  "name": "A String", # Output only. Identifier. The resource name of the generator. Format: `projects//locations//generators/`
+  "summarizationContext": { # Summarization context that customer can configure. # Input of prebuilt Summarization feature.
+    "fewShotExamples": [ # Optional. List of few shot examples.
+      { # Providing examples in the generator (i.e. building a few-shot generator) helps convey the desired format of the LLM response.
+        "conversationContext": { # Context of the conversation, including transcripts. # Optional. Conversation transcripts.
+          "messageEntries": [ # Optional. List of message transcripts in the conversation.
+            { # Represents a message entry of a conversation.
+              "createTime": "A String", # Optional. Create time of the message entry.
+              "languageCode": "A String", # Optional. The language of the text. See [Language Support](https://cloud.google.com/dialogflow/docs/reference/language) for a list of the currently supported language codes.
+              "role": "A String", # Optional. Participant role of the message.
+              "text": "A String", # Optional. Transcript content of the message.
+            },
+          ],
+        },
+        "extraInfo": { # Optional. Key is the placeholder field name in input, value is the value of the placeholder. E.g. instruction contains "@price", and ingested data has <"price", "10">
+          "a_key": "A String",
+        },
+        "output": { # Suggestion generated using a Generator. # Required. Example output of the model.
+          "summarySuggestion": { # Suggested summary of the conversation. # Optional. Suggested summary.
+            "summarySections": [ # Required. All the parts of generated summary.
+              { # A component of the generated summary.
+                "section": "A String", # Required. Name of the section.
+                "summary": "A String", # Required. Summary text for the section.
+              },
+            ],
+          },
+        },
+        "summarizationSectionList": { # List of summarization sections. # Summarization sections.
+          "summarizationSections": [ # Optional. Summarization sections.
+            { # Represents the section of summarization.
+              "definition": "A String", # Optional. Definition of the section, for example, "what the customer needs help with or has question about."
+              "key": "A String", # Optional. Name of the section, for example, "situation".
+              "type": "A String", # Optional. Type of the summarization section.
+            },
+          ],
+        },
+      },
+    ],
+    "outputLanguageCode": "A String", # Optional. The target language of the generated summary. The language code for conversation will be used if this field is empty. Supported 2.0 and later versions.
+    "summarizationSections": [ # Optional. List of sections. Note it contains both predefined section sand customer defined sections.
+      { # Represents the section of summarization.
+        "definition": "A String", # Optional. Definition of the section, for example, "what the customer needs help with or has question about."
+        "key": "A String", # Optional. Name of the section, for example, "situation".
+        "type": "A String", # Optional. Type of the summarization section.
+      },
+    ],
+    "version": "A String", # Optional. Version of the feature. If not set, default to latest version. Current candidates are ["1.0"].
+  },
+  "triggerEvent": "A String", # Optional. The trigger event of the generator. It defines when the generator is triggered in a conversation.
+  "updateTime": "A String", # Output only. Update time of this generator.
+}
+
+  generatorId: string, Optional. The ID to use for the generator, which will become the final component of the generator's resource name. The generator ID must match the regular expression `a-zA-Z*`, with a length in the range [3,64]. If the field is not provided, an ID will be auto-generated. If the field is provided, the caller is responsible for 1. the uniqueness of the ID, otherwise the request will be rejected; and 2. consistently using (or not using) custom IDs within a project, to better ensure uniqueness.
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # LLM generator.
+  "createTime": "A String", # Output only. Creation time of this generator.
+  "description": "A String", # Optional. Human readable description of the generator.
+  "inferenceParameter": { # The parameters of inference. # Optional. Inference parameters for this generator.
+    "maxOutputTokens": 42, # Optional. Maximum number of the output tokens for the generator.
+    "temperature": 3.14, # Optional. Controls the randomness of LLM predictions. Low temperature = less random. High temperature = more random. If unset (or 0), uses a default value of 0.
+    "topK": 42, # Optional. Top-k changes how the model selects tokens for output. A top-k of 1 means the selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-k of 3 means that the next token is selected from among the 3 most probable tokens (using temperature). For each token selection step, the top K tokens with the highest probabilities are sampled. Then tokens are further filtered based on topP with the final token selected using temperature sampling. Specify a lower value for less random responses and a higher value for more random responses. Acceptable value is [1, 40], default to 40.
+    "topP": 3.14, # Optional. Top-p changes how the model selects tokens for output. Tokens are selected from most K (see topK parameter) probable to least until the sum of their probabilities equals the top-p value. For example, if tokens A, B, and C have a probability of 0.3, 0.2, and 0.1 and the top-p value is 0.5, then the model will select either A or B as the next token (using temperature) and doesn't consider C. The default top-p value is 0.95. Specify a lower value for less random responses and a higher value for more random responses. Acceptable value is [0.0, 1.0], default to 0.95.
+  },
+  "name": "A String", # Output only. Identifier. The resource name of the generator. Format: `projects//locations//generators/`
+  "summarizationContext": { # Summarization context that customer can configure. # Input of prebuilt Summarization feature.
+    "fewShotExamples": [ # Optional. List of few shot examples.
+      { # Providing examples in the generator (i.e. building a few-shot generator) helps convey the desired format of the LLM response.
+        "conversationContext": { # Context of the conversation, including transcripts. # Optional. Conversation transcripts.
+          "messageEntries": [ # Optional. List of message transcripts in the conversation.
+            { # Represents a message entry of a conversation.
+              "createTime": "A String", # Optional. Create time of the message entry.
+              "languageCode": "A String", # Optional. The language of the text. See [Language Support](https://cloud.google.com/dialogflow/docs/reference/language) for a list of the currently supported language codes.
+              "role": "A String", # Optional. Participant role of the message.
+              "text": "A String", # Optional. Transcript content of the message.
+            },
+          ],
+        },
+        "extraInfo": { # Optional. Key is the placeholder field name in input, value is the value of the placeholder. E.g. instruction contains "@price", and ingested data has <"price", "10">
+          "a_key": "A String",
+        },
+        "output": { # Suggestion generated using a Generator. # Required. Example output of the model.
+          "summarySuggestion": { # Suggested summary of the conversation. # Optional. Suggested summary.
+            "summarySections": [ # Required. All the parts of generated summary.
+              { # A component of the generated summary.
+                "section": "A String", # Required. Name of the section.
+                "summary": "A String", # Required. Summary text for the section.
+              },
+            ],
+          },
+        },
+        "summarizationSectionList": { # List of summarization sections. # Summarization sections.
+          "summarizationSections": [ # Optional. Summarization sections.
+            { # Represents the section of summarization.
+              "definition": "A String", # Optional. Definition of the section, for example, "what the customer needs help with or has question about."
+              "key": "A String", # Optional. Name of the section, for example, "situation".
+              "type": "A String", # Optional. Type of the summarization section.
+            },
+          ],
+        },
+      },
+    ],
+    "outputLanguageCode": "A String", # Optional. The target language of the generated summary. The language code for conversation will be used if this field is empty. Supported 2.0 and later versions.
+    "summarizationSections": [ # Optional. List of sections. Note it contains both predefined section sand customer defined sections.
+      { # Represents the section of summarization.
+        "definition": "A String", # Optional. Definition of the section, for example, "what the customer needs help with or has question about."
+        "key": "A String", # Optional. Name of the section, for example, "situation".
+        "type": "A String", # Optional. Type of the summarization section.
+      },
+    ],
+    "version": "A String", # Optional. Version of the feature. If not set, default to latest version. Current candidates are ["1.0"].
+  },
+  "triggerEvent": "A String", # Optional. The trigger event of the generator. It defines when the generator is triggered in a conversation.
+  "updateTime": "A String", # Output only. Update time of this generator.
+}
+
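+
+  Example (a minimal usage sketch, not part of the generated reference; it assumes
+  application-default credentials, and the project, location, generator ID, and body
+  values are illustrative):
+
+    from googleapiclient.discovery import build
+
+    # Build the Dialogflow v2 client; credentials come from the environment.
+    service = build("dialogflow", "v2")
+
+    body = {
+        "description": "Summarizes support conversations.",
+        "summarizationContext": {
+            "summarizationSections": [{"key": "situation", "type": "SITUATION"}],
+            "version": "1.0",
+        },
+    }
+    generator = (
+        service.projects()
+        .locations()
+        .generators()
+        .create(
+            parent="projects/my-project/locations/global",  # illustrative
+            body=body,
+            generatorId="my-summarizer",  # illustrative
+        )
+        .execute()
+    )
+    print(generator["name"])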
+ +
+ delete(name, x__xgafv=None) +
Deletes a generator.
+
+Args:
+  name: string, Required. The generator resource name to delete. Format: `projects//locations//generators/` (required)
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # A generic empty message that you can re-use to avoid defining duplicated empty messages in your APIs. A typical example is to use it as the request or the response type of an API method. For instance: service Foo { rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty); }
+}
+
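+
+  Example (a sketch reusing the `service` client from the create() example above;
+  the resource name is illustrative):
+
+    service.projects().locations().generators().delete(
+        name="projects/my-project/locations/global/generators/my-summarizer"
+    ).execute()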
+ +
+ get(name, x__xgafv=None) +
Retrieves a generator.
+
+Args:
+  name: string, Required. The generator resource name to retrieve. Format: `projects//locations//generators/` (required)
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # LLM generator.
+  "createTime": "A String", # Output only. Creation time of this generator.
+  "description": "A String", # Optional. Human readable description of the generator.
+  "inferenceParameter": { # The parameters of inference. # Optional. Inference parameters for this generator.
+    "maxOutputTokens": 42, # Optional. Maximum number of the output tokens for the generator.
+    "temperature": 3.14, # Optional. Controls the randomness of LLM predictions. Low temperature = less random. High temperature = more random. If unset (or 0), uses a default value of 0.
+    "topK": 42, # Optional. Top-k changes how the model selects tokens for output. A top-k of 1 means the selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-k of 3 means that the next token is selected from among the 3 most probable tokens (using temperature). For each token selection step, the top K tokens with the highest probabilities are sampled. Then tokens are further filtered based on topP with the final token selected using temperature sampling. Specify a lower value for less random responses and a higher value for more random responses. Acceptable value is [1, 40], default to 40.
+    "topP": 3.14, # Optional. Top-p changes how the model selects tokens for output. Tokens are selected from most K (see topK parameter) probable to least until the sum of their probabilities equals the top-p value. For example, if tokens A, B, and C have a probability of 0.3, 0.2, and 0.1 and the top-p value is 0.5, then the model will select either A or B as the next token (using temperature) and doesn't consider C. The default top-p value is 0.95. Specify a lower value for less random responses and a higher value for more random responses. Acceptable value is [0.0, 1.0], default to 0.95.
+  },
+  "name": "A String", # Output only. Identifier. The resource name of the generator. Format: `projects//locations//generators/`
+  "summarizationContext": { # Summarization context that customer can configure. # Input of prebuilt Summarization feature.
+    "fewShotExamples": [ # Optional. List of few shot examples.
+      { # Providing examples in the generator (i.e. building a few-shot generator) helps convey the desired format of the LLM response.
+        "conversationContext": { # Context of the conversation, including transcripts. # Optional. Conversation transcripts.
+          "messageEntries": [ # Optional. List of message transcripts in the conversation.
+            { # Represents a message entry of a conversation.
+              "createTime": "A String", # Optional. Create time of the message entry.
+              "languageCode": "A String", # Optional. The language of the text. See [Language Support](https://cloud.google.com/dialogflow/docs/reference/language) for a list of the currently supported language codes.
+              "role": "A String", # Optional. Participant role of the message.
+              "text": "A String", # Optional. Transcript content of the message.
+            },
+          ],
+        },
+        "extraInfo": { # Optional. Key is the placeholder field name in input, value is the value of the placeholder. E.g. instruction contains "@price", and ingested data has <"price", "10">
+          "a_key": "A String",
+        },
+        "output": { # Suggestion generated using a Generator. # Required. Example output of the model.
+          "summarySuggestion": { # Suggested summary of the conversation. # Optional. Suggested summary.
+            "summarySections": [ # Required. All the parts of generated summary.
+              { # A component of the generated summary.
+                "section": "A String", # Required. Name of the section.
+                "summary": "A String", # Required. Summary text for the section.
+              },
+            ],
+          },
+        },
+        "summarizationSectionList": { # List of summarization sections. # Summarization sections.
+          "summarizationSections": [ # Optional. Summarization sections.
+            { # Represents the section of summarization.
+              "definition": "A String", # Optional. Definition of the section, for example, "what the customer needs help with or has question about."
+              "key": "A String", # Optional. Name of the section, for example, "situation".
+              "type": "A String", # Optional. Type of the summarization section.
+            },
+          ],
+        },
+      },
+    ],
+    "outputLanguageCode": "A String", # Optional. The target language of the generated summary. The language code for conversation will be used if this field is empty. Supported 2.0 and later versions.
+    "summarizationSections": [ # Optional. List of sections. Note it contains both predefined section sand customer defined sections.
+      { # Represents the section of summarization.
+        "definition": "A String", # Optional. Definition of the section, for example, "what the customer needs help with or has question about."
+        "key": "A String", # Optional. Name of the section, for example, "situation".
+        "type": "A String", # Optional. Type of the summarization section.
+      },
+    ],
+    "version": "A String", # Optional. Version of the feature. If not set, default to latest version. Current candidates are ["1.0"].
+  },
+  "triggerEvent": "A String", # Optional. The trigger event of the generator. It defines when the generator is triggered in a conversation.
+  "updateTime": "A String", # Output only. Update time of this generator.
+}
+
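+
+  Example (a sketch reusing the `service` client from the create() example above;
+  the resource name is illustrative):
+
+    generator = service.projects().locations().generators().get(
+        name="projects/my-project/locations/global/generators/my-summarizer"
+    ).execute()
+    print(generator.get("triggerEvent"))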
+ +
+ list(parent, pageSize=None, pageToken=None, x__xgafv=None) +
Lists generators.
+
+Args:
+  parent: string, Required. The project/location to list generators for. Format: `projects//locations/` (required)
+  pageSize: integer, Optional. Maximum number of generators to return in a single page. Defaults to 10.
+  pageToken: string, Optional. The next_page_token value returned from a previous list request.
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Response of ListGenerators.
+  "generators": [ # List of generators retrieved.
+    { # LLM generator.
+      "createTime": "A String", # Output only. Creation time of this generator.
+      "description": "A String", # Optional. Human readable description of the generator.
+      "inferenceParameter": { # The parameters of inference. # Optional. Inference parameters for this generator.
+        "maxOutputTokens": 42, # Optional. Maximum number of the output tokens for the generator.
+        "temperature": 3.14, # Optional. Controls the randomness of LLM predictions. Low temperature = less random. High temperature = more random. If unset (or 0), uses a default value of 0.
+        "topK": 42, # Optional. Top-k changes how the model selects tokens for output. A top-k of 1 means the selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-k of 3 means that the next token is selected from among the 3 most probable tokens (using temperature). For each token selection step, the top K tokens with the highest probabilities are sampled. Then tokens are further filtered based on topP with the final token selected using temperature sampling. Specify a lower value for less random responses and a higher value for more random responses. Acceptable value is [1, 40], default to 40.
+        "topP": 3.14, # Optional. Top-p changes how the model selects tokens for output. Tokens are selected from most K (see topK parameter) probable to least until the sum of their probabilities equals the top-p value. For example, if tokens A, B, and C have a probability of 0.3, 0.2, and 0.1 and the top-p value is 0.5, then the model will select either A or B as the next token (using temperature) and doesn't consider C. The default top-p value is 0.95. Specify a lower value for less random responses and a higher value for more random responses. Acceptable value is [0.0, 1.0], default to 0.95.
+      },
+      "name": "A String", # Output only. Identifier. The resource name of the generator. Format: `projects//locations//generators/`
+      "summarizationContext": { # Summarization context that customer can configure. # Input of prebuilt Summarization feature.
+        "fewShotExamples": [ # Optional. List of few shot examples.
+          { # Providing examples in the generator (i.e. building a few-shot generator) helps convey the desired format of the LLM response.
+            "conversationContext": { # Context of the conversation, including transcripts. # Optional. Conversation transcripts.
+              "messageEntries": [ # Optional. List of message transcripts in the conversation.
+                { # Represents a message entry of a conversation.
+                  "createTime": "A String", # Optional. Create time of the message entry.
+                  "languageCode": "A String", # Optional. The language of the text. See [Language Support](https://cloud.google.com/dialogflow/docs/reference/language) for a list of the currently supported language codes.
+                  "role": "A String", # Optional. Participant role of the message.
+                  "text": "A String", # Optional. Transcript content of the message.
+                },
+              ],
+            },
+            "extraInfo": { # Optional. Key is the placeholder field name in input, value is the value of the placeholder. E.g. instruction contains "@price", and ingested data has <"price", "10">
+              "a_key": "A String",
+            },
+            "output": { # Suggestion generated using a Generator. # Required. Example output of the model.
+              "summarySuggestion": { # Suggested summary of the conversation. # Optional. Suggested summary.
+                "summarySections": [ # Required. All the parts of generated summary.
+                  { # A component of the generated summary.
+                    "section": "A String", # Required. Name of the section.
+                    "summary": "A String", # Required. Summary text for the section.
+                  },
+                ],
+              },
+            },
+            "summarizationSectionList": { # List of summarization sections. # Summarization sections.
+              "summarizationSections": [ # Optional. Summarization sections.
+                { # Represents the section of summarization.
+                  "definition": "A String", # Optional. Definition of the section, for example, "what the customer needs help with or has question about."
+                  "key": "A String", # Optional. Name of the section, for example, "situation".
+                  "type": "A String", # Optional. Type of the summarization section.
+                },
+              ],
+            },
+          },
+        ],
+        "outputLanguageCode": "A String", # Optional. The target language of the generated summary. The language code for conversation will be used if this field is empty. Supported 2.0 and later versions.
+        "summarizationSections": [ # Optional. List of sections. Note it contains both predefined section sand customer defined sections.
+          { # Represents the section of summarization.
+            "definition": "A String", # Optional. Definition of the section, for example, "what the customer needs help with or has question about."
+            "key": "A String", # Optional. Name of the section, for example, "situation".
+            "type": "A String", # Optional. Type of the summarization section.
+          },
+        ],
+        "version": "A String", # Optional. Version of the feature. If not set, default to latest version. Current candidates are ["1.0"].
+      },
+      "triggerEvent": "A String", # Optional. The trigger event of the generator. It defines when the generator is triggered in a conversation.
+      "updateTime": "A String", # Output only. Update time of this generator.
+    },
+  ],
+  "nextPageToken": "A String", # Token to retrieve the next page of results, or empty if there are no more results in the list.
+}
+
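+
+  Example (a sketch reusing the `service` client from the create() example above;
+  the parent is illustrative; see the list_next() example below for paging through
+  all results):
+
+    response = service.projects().locations().generators().list(
+        parent="projects/my-project/locations/global", pageSize=10
+    ).execute()
+    for g in response.get("generators", []):
+        print(g["name"], g.get("description", ""))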
+ +
+ list_next() +
Retrieves the next page of results.
+
+        Args:
+          previous_request: The request for the previous page. (required)
+          previous_response: The response from the request for the previous page. (required)
+
+        Returns:
+          A request object that you can call 'execute()' on to request the next
+          page. Returns None if there are no more items in the collection.
+        
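+
+  Example (a sketch of the standard pagination loop, reusing the `service` client
+  from the create() example above; the parent is illustrative):
+
+    generators_api = service.projects().locations().generators()
+    request = generators_api.list(parent="projects/my-project/locations/global")
+    while request is not None:
+        response = request.execute()
+        for g in response.get("generators", []):
+            print(g["name"])
+        # list_next() returns None once the collection is exhausted.
+        request = generators_api.list_next(
+            previous_request=request, previous_response=response
+        )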
+
+ +
+ patch(name, body=None, updateMask=None, x__xgafv=None) +
Updates a generator.
+
+Args:
+  name: string, Output only. Identifier. The resource name of the generator. Format: `projects//locations//generators/` (required)
+  body: object, The request body.
+    The object takes the form of:
+
+{ # LLM generator.
+  "createTime": "A String", # Output only. Creation time of this generator.
+  "description": "A String", # Optional. Human readable description of the generator.
+  "inferenceParameter": { # The parameters of inference. # Optional. Inference parameters for this generator.
+    "maxOutputTokens": 42, # Optional. Maximum number of the output tokens for the generator.
+    "temperature": 3.14, # Optional. Controls the randomness of LLM predictions. Low temperature = less random. High temperature = more random. If unset (or 0), uses a default value of 0.
+    "topK": 42, # Optional. Top-k changes how the model selects tokens for output. A top-k of 1 means the selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-k of 3 means that the next token is selected from among the 3 most probable tokens (using temperature). For each token selection step, the top K tokens with the highest probabilities are sampled. Then tokens are further filtered based on topP with the final token selected using temperature sampling. Specify a lower value for less random responses and a higher value for more random responses. Acceptable value is [1, 40], default to 40.
+    "topP": 3.14, # Optional. Top-p changes how the model selects tokens for output. Tokens are selected from most K (see topK parameter) probable to least until the sum of their probabilities equals the top-p value. For example, if tokens A, B, and C have a probability of 0.3, 0.2, and 0.1 and the top-p value is 0.5, then the model will select either A or B as the next token (using temperature) and doesn't consider C. The default top-p value is 0.95. Specify a lower value for less random responses and a higher value for more random responses. Acceptable value is [0.0, 1.0], default to 0.95.
+  },
+  "name": "A String", # Output only. Identifier. The resource name of the generator. Format: `projects//locations//generators/`
+  "summarizationContext": { # Summarization context that customer can configure. # Input of prebuilt Summarization feature.
+    "fewShotExamples": [ # Optional. List of few shot examples.
+      { # Providing examples in the generator (i.e. building a few-shot generator) helps convey the desired format of the LLM response.
+        "conversationContext": { # Context of the conversation, including transcripts. # Optional. Conversation transcripts.
+          "messageEntries": [ # Optional. List of message transcripts in the conversation.
+            { # Represents a message entry of a conversation.
+              "createTime": "A String", # Optional. Create time of the message entry.
+              "languageCode": "A String", # Optional. The language of the text. See [Language Support](https://cloud.google.com/dialogflow/docs/reference/language) for a list of the currently supported language codes.
+              "role": "A String", # Optional. Participant role of the message.
+              "text": "A String", # Optional. Transcript content of the message.
+            },
+          ],
+        },
+        "extraInfo": { # Optional. Key is the placeholder field name in input, value is the value of the placeholder. E.g. instruction contains "@price", and ingested data has <"price", "10">
+          "a_key": "A String",
+        },
+        "output": { # Suggestion generated using a Generator. # Required. Example output of the model.
+          "summarySuggestion": { # Suggested summary of the conversation. # Optional. Suggested summary.
+            "summarySections": [ # Required. All the parts of generated summary.
+              { # A component of the generated summary.
+                "section": "A String", # Required. Name of the section.
+                "summary": "A String", # Required. Summary text for the section.
+              },
+            ],
+          },
+        },
+        "summarizationSectionList": { # List of summarization sections. # Summarization sections.
+          "summarizationSections": [ # Optional. Summarization sections.
+            { # Represents the section of summarization.
+              "definition": "A String", # Optional. Definition of the section, for example, "what the customer needs help with or has question about."
+              "key": "A String", # Optional. Name of the section, for example, "situation".
+              "type": "A String", # Optional. Type of the summarization section.
+            },
+          ],
+        },
+      },
+    ],
+    "outputLanguageCode": "A String", # Optional. The target language of the generated summary. The language code for conversation will be used if this field is empty. Supported 2.0 and later versions.
+    "summarizationSections": [ # Optional. List of sections. Note it contains both predefined section sand customer defined sections.
+      { # Represents the section of summarization.
+        "definition": "A String", # Optional. Definition of the section, for example, "what the customer needs help with or has question about."
+        "key": "A String", # Optional. Name of the section, for example, "situation".
+        "type": "A String", # Optional. Type of the summarization section.
+      },
+    ],
+    "version": "A String", # Optional. Version of the feature. If not set, default to latest version. Current candidates are ["1.0"].
+  },
+  "triggerEvent": "A String", # Optional. The trigger event of the generator. It defines when the generator is triggered in a conversation.
+  "updateTime": "A String", # Output only. Update time of this generator.
+}
+
+  updateMask: string, Optional. The list of fields to update.
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # LLM generator.
+  "createTime": "A String", # Output only. Creation time of this generator.
+  "description": "A String", # Optional. Human readable description of the generator.
+  "inferenceParameter": { # The parameters of inference. # Optional. Inference parameters for this generator.
+    "maxOutputTokens": 42, # Optional. Maximum number of the output tokens for the generator.
+    "temperature": 3.14, # Optional. Controls the randomness of LLM predictions. Low temperature = less random. High temperature = more random. If unset (or 0), uses a default value of 0.
+    "topK": 42, # Optional. Top-k changes how the model selects tokens for output. A top-k of 1 means the selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-k of 3 means that the next token is selected from among the 3 most probable tokens (using temperature). For each token selection step, the top K tokens with the highest probabilities are sampled. Then tokens are further filtered based on topP with the final token selected using temperature sampling. Specify a lower value for less random responses and a higher value for more random responses. Acceptable value is [1, 40], default to 40.
+    "topP": 3.14, # Optional. Top-p changes how the model selects tokens for output. Tokens are selected from most K (see topK parameter) probable to least until the sum of their probabilities equals the top-p value. For example, if tokens A, B, and C have a probability of 0.3, 0.2, and 0.1 and the top-p value is 0.5, then the model will select either A or B as the next token (using temperature) and doesn't consider C. The default top-p value is 0.95. Specify a lower value for less random responses and a higher value for more random responses. Acceptable value is [0.0, 1.0], default to 0.95.
+  },
+  "name": "A String", # Output only. Identifier. The resource name of the generator. Format: `projects//locations//generators/`
+  "summarizationContext": { # Summarization context that customer can configure. # Input of prebuilt Summarization feature.
+    "fewShotExamples": [ # Optional. List of few shot examples.
+      { # Providing examples in the generator (i.e. building a few-shot generator) helps convey the desired format of the LLM response.
+        "conversationContext": { # Context of the conversation, including transcripts. # Optional. Conversation transcripts.
+          "messageEntries": [ # Optional. List of message transcripts in the conversation.
+            { # Represents a message entry of a conversation.
+              "createTime": "A String", # Optional. Create time of the message entry.
+              "languageCode": "A String", # Optional. The language of the text. See [Language Support](https://cloud.google.com/dialogflow/docs/reference/language) for a list of the currently supported language codes.
+              "role": "A String", # Optional. Participant role of the message.
+              "text": "A String", # Optional. Transcript content of the message.
+            },
+          ],
+        },
+        "extraInfo": { # Optional. Key is the placeholder field name in input, value is the value of the placeholder. E.g. instruction contains "@price", and ingested data has <"price", "10">
+          "a_key": "A String",
+        },
+        "output": { # Suggestion generated using a Generator. # Required. Example output of the model.
+          "summarySuggestion": { # Suggested summary of the conversation. # Optional. Suggested summary.
+            "summarySections": [ # Required. All the parts of generated summary.
+              { # A component of the generated summary.
+                "section": "A String", # Required. Name of the section.
+                "summary": "A String", # Required. Summary text for the section.
+              },
+            ],
+          },
+        },
+        "summarizationSectionList": { # List of summarization sections. # Summarization sections.
+          "summarizationSections": [ # Optional. Summarization sections.
+            { # Represents the section of summarization.
+              "definition": "A String", # Optional. Definition of the section, for example, "what the customer needs help with or has question about."
+              "key": "A String", # Optional. Name of the section, for example, "situation".
+              "type": "A String", # Optional. Type of the summarization section.
+            },
+          ],
+        },
+      },
+    ],
+    "outputLanguageCode": "A String", # Optional. The target language of the generated summary. The language code for conversation will be used if this field is empty. Supported 2.0 and later versions.
+    "summarizationSections": [ # Optional. List of sections. Note it contains both predefined section sand customer defined sections.
+      { # Represents the section of summarization.
+        "definition": "A String", # Optional. Definition of the section, for example, "what the customer needs help with or has question about."
+        "key": "A String", # Optional. Name of the section, for example, "situation".
+        "type": "A String", # Optional. Type of the summarization section.
+      },
+    ],
+    "version": "A String", # Optional. Version of the feature. If not set, default to latest version. Current candidates are ["1.0"].
+  },
+  "triggerEvent": "A String", # Optional. The trigger event of the generator. It defines when the generator is triggered in a conversation.
+  "updateTime": "A String", # Output only. Update time of this generator.
+}
+
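+
+  Example (a sketch reusing the `service` client from the create() example above;
+  the resource name and field values are illustrative):
+
+    updated = service.projects().locations().generators().patch(
+        name="projects/my-project/locations/global/generators/my-summarizer",
+        body={"description": "Updated description."},
+        updateMask="description",
+    ).execute()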
+ + \ No newline at end of file diff --git a/docs/dyn/dialogflow_v2.projects.locations.html b/docs/dyn/dialogflow_v2.projects.locations.html index 98b78dd3c20..11b9e7ab32f 100644 --- a/docs/dyn/dialogflow_v2.projects.locations.html +++ b/docs/dyn/dialogflow_v2.projects.locations.html @@ -104,6 +104,11 @@

Instance Methods

Returns the conversations Resource.

+

+ generators() +

+

Returns the generators Resource.

+

knowledgeBases()

@@ -114,6 +119,11 @@

Instance Methods

Returns the operations Resource.

+

+ statelessSuggestion() +

+

Returns the statelessSuggestion Resource.

+

suggestions()

diff --git a/docs/dyn/dialogflow_v2.projects.locations.statelessSuggestion.html b/docs/dyn/dialogflow_v2.projects.locations.statelessSuggestion.html new file mode 100644 index 00000000000..53066f0dc86 --- /dev/null +++ b/docs/dyn/dialogflow_v2.projects.locations.statelessSuggestion.html @@ -0,0 +1,197 @@ + + + +

Dialogflow API . projects . locations . statelessSuggestion

+

Instance Methods

+

+ close()

+

Close httplib2 connections.

+

+ generate(parent, body=None, x__xgafv=None)

+

Generates and returns a suggestion for a conversation that does not have a resource created for it.

+

Method Details

+
+ close() +
Close httplib2 connections.
+
+ +
+ generate(parent, body=None, x__xgafv=None) +
Generates and returns a suggestion for a conversation that does not have a resource created for it.
+
+Args:
+  parent: string, Required. The parent resource to charge for the Suggestion's generation. Format: `projects//locations/`. (required)
+  body: object, The request body.
+    The object takes the form of:
+
+{ # The request message for Conversations.GenerateStatelessSuggestion.
+  "conversationContext": { # Context of the conversation, including transcripts. # Optional. Context of the conversation, including transcripts.
+    "messageEntries": [ # Optional. List of message transcripts in the conversation.
+      { # Represents a message entry of a conversation.
+        "createTime": "A String", # Optional. Create time of the message entry.
+        "languageCode": "A String", # Optional. The language of the text. See [Language Support](https://cloud.google.com/dialogflow/docs/reference/language) for a list of the currently supported language codes.
+        "role": "A String", # Optional. Participant role of the message.
+        "text": "A String", # Optional. Transcript content of the message.
+      },
+    ],
+  },
+  "generator": { # LLM generator. # Uncreated generator. It should be a complete generator that includes all information about the generator.
+    "createTime": "A String", # Output only. Creation time of this generator.
+    "description": "A String", # Optional. Human readable description of the generator.
+    "inferenceParameter": { # The parameters of inference. # Optional. Inference parameters for this generator.
+      "maxOutputTokens": 42, # Optional. Maximum number of the output tokens for the generator.
+      "temperature": 3.14, # Optional. Controls the randomness of LLM predictions. Low temperature = less random. High temperature = more random. If unset (or 0), uses a default value of 0.
+      "topK": 42, # Optional. Top-k changes how the model selects tokens for output. A top-k of 1 means the selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-k of 3 means that the next token is selected from among the 3 most probable tokens (using temperature). For each token selection step, the top K tokens with the highest probabilities are sampled. Then tokens are further filtered based on topP with the final token selected using temperature sampling. Specify a lower value for less random responses and a higher value for more random responses. Acceptable value is [1, 40], default to 40.
+      "topP": 3.14, # Optional. Top-p changes how the model selects tokens for output. Tokens are selected from most K (see topK parameter) probable to least until the sum of their probabilities equals the top-p value. For example, if tokens A, B, and C have a probability of 0.3, 0.2, and 0.1 and the top-p value is 0.5, then the model will select either A or B as the next token (using temperature) and doesn't consider C. The default top-p value is 0.95. Specify a lower value for less random responses and a higher value for more random responses. Acceptable value is [0.0, 1.0], default to 0.95.
+    },
+    "name": "A String", # Output only. Identifier. The resource name of the generator. Format: `projects//locations//generators/`
+    "summarizationContext": { # Summarization context that customer can configure. # Input of prebuilt Summarization feature.
+      "fewShotExamples": [ # Optional. List of few shot examples.
+        { # Providing examples in the generator (i.e. building a few-shot generator) helps convey the desired format of the LLM response.
+          "conversationContext": { # Context of the conversation, including transcripts. # Optional. Conversation transcripts.
+            "messageEntries": [ # Optional. List of message transcripts in the conversation.
+              { # Represents a message entry of a conversation.
+                "createTime": "A String", # Optional. Create time of the message entry.
+                "languageCode": "A String", # Optional. The language of the text. See [Language Support](https://cloud.google.com/dialogflow/docs/reference/language) for a list of the currently supported language codes.
+                "role": "A String", # Optional. Participant role of the message.
+                "text": "A String", # Optional. Transcript content of the message.
+              },
+            ],
+          },
+          "extraInfo": { # Optional. Key is the placeholder field name in input, value is the value of the placeholder. E.g. instruction contains "@price", and ingested data has <"price", "10">
+            "a_key": "A String",
+          },
+          "output": { # Suggestion generated using a Generator. # Required. Example output of the model.
+            "summarySuggestion": { # Suggested summary of the conversation. # Optional. Suggested summary.
+              "summarySections": [ # Required. All the parts of generated summary.
+                { # A component of the generated summary.
+                  "section": "A String", # Required. Name of the section.
+                  "summary": "A String", # Required. Summary text for the section.
+                },
+              ],
+            },
+          },
+          "summarizationSectionList": { # List of summarization sections. # Summarization sections.
+            "summarizationSections": [ # Optional. Summarization sections.
+              { # Represents the section of summarization.
+                "definition": "A String", # Optional. Definition of the section, for example, "what the customer needs help with or has question about."
+                "key": "A String", # Optional. Name of the section, for example, "situation".
+                "type": "A String", # Optional. Type of the summarization section.
+              },
+            ],
+          },
+        },
+      ],
+      "outputLanguageCode": "A String", # Optional. The target language of the generated summary. The language code for conversation will be used if this field is empty. Supported 2.0 and later versions.
+      "summarizationSections": [ # Optional. List of sections. Note it contains both predefined section sand customer defined sections.
+        { # Represents the section of summarization.
+          "definition": "A String", # Optional. Definition of the section, for example, "what the customer needs help with or has question about."
+          "key": "A String", # Optional. Name of the section, for example, "situation".
+          "type": "A String", # Optional. Type of the summarization section.
+        },
+      ],
+      "version": "A String", # Optional. Version of the feature. If not set, default to latest version. Current candidates are ["1.0"].
+    },
+    "triggerEvent": "A String", # Optional. The trigger event of the generator. It defines when the generator is triggered in a conversation.
+    "updateTime": "A String", # Output only. Update time of this generator.
+  },
+  "generatorName": "A String", # The resource name of the existing created generator. Format: `projects//locations//generators/`
+  "triggerEvents": [ # Optional. A list of trigger events. Generator will be triggered only if it's trigger event is included here.
+    "A String",
+  ],
+}
+
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # The response message for Conversations.GenerateStatelessSuggestion.
+  "generatorSuggestion": { # Suggestion generated using a Generator. # Required. Generated suggestion for a conversation.
+    "summarySuggestion": { # Suggested summary of the conversation. # Optional. Suggested summary.
+      "summarySections": [ # Required. All the parts of generated summary.
+        { # A component of the generated summary.
+          "section": "A String", # Required. Name of the section.
+          "summary": "A String", # Required. Summary text for the section.
+        },
+      ],
+    },
+  },
+}
+
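For orientation, here is a minimal sketch of calling this request with the google-api-python-client. It assumes the method is exposed as projects().locations().statelessSuggestion().generate() (the resource path is not shown in this hunk), and the project, location, and generator names are hypothetical placeholders:

from googleapiclient.discovery import build

# Build the Dialogflow v2beta1 client (uses application default credentials).
client = build("dialogflow", "v2beta1")

parent = "projects/my-project/locations/global"  # hypothetical project/location
body = {
    # Reference an already-created generator instead of passing one inline.
    "generatorName": parent + "/generators/my-generator",  # hypothetical
    "conversationContext": {
        "messageEntries": [
            {"role": "END_USER", "text": "My package never arrived.", "languageCode": "en-US"},
            {"role": "HUMAN_AGENT", "text": "Sorry about that, let me take a look.", "languageCode": "en-US"},
        ]
    },
}

response = (
    client.projects()
    .locations()
    .statelessSuggestion()  # assumed resource name for GenerateStatelessSuggestion
    .generate(parent=parent, body=body)
    .execute()
)

# The generated summary is nested under generatorSuggestion.summarySuggestion.
for section in response["generatorSuggestion"]["summarySuggestion"]["summarySections"]:
    print(section["section"], "->", section["summary"])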
+ + \ No newline at end of file diff --git a/docs/dyn/dialogflow_v2.projects.locations.suggestions.html b/docs/dyn/dialogflow_v2.projects.locations.suggestions.html index 9ec0c2aa413..5f04593c75d 100644 --- a/docs/dyn/dialogflow_v2.projects.locations.suggestions.html +++ b/docs/dyn/dialogflow_v2.projects.locations.suggestions.html @@ -159,6 +159,9 @@

Method Details

}, }, ], + "generators": [ # Optional. List of various generator resource names used in the conversation profile. + "A String", + ], "groupSuggestionResponses": True or False, # If `group_suggestion_responses` is false, and there are multiple `feature_configs` in `event based suggestion` or StreamingAnalyzeContent, we will try to deliver suggestions to customers as soon as we get new suggestion. Different type of suggestions based on the same context will be in separate Pub/Sub event or `StreamingAnalyzeContentResponse`. If `group_suggestion_responses` set to true. All the suggestions to the same participant based on the same context will be grouped into a single Pub/Sub event or StreamingAnalyzeContentResponse. }, "humanAgentSuggestionConfig": { # Detail human agent assistant config. # Configuration for agent assistance of human agent participant. @@ -213,6 +216,9 @@

Method Details

}, }, ], + "generators": [ # Optional. List of various generator resource names used in the conversation profile. + "A String", + ], "groupSuggestionResponses": True or False, # If `group_suggestion_responses` is false, and there are multiple `feature_configs` in `event based suggestion` or StreamingAnalyzeContent, we will try to deliver suggestions to customers as soon as we get new suggestion. Different type of suggestions based on the same context will be in separate Pub/Sub event or `StreamingAnalyzeContentResponse`. If `group_suggestion_responses` set to true. All the suggestions to the same participant based on the same context will be grouped into a single Pub/Sub event or StreamingAnalyzeContentResponse. }, "messageAnalysisConfig": { # Configuration for analyses to run on each conversation message. # Configuration for message analysis. @@ -225,7 +231,7 @@

Method Details

}, }, "humanAgentHandoffConfig": { # Defines the hand off to a live agent, typically on which external agent service provider to connect to a conversation. Currently, this feature is not general available, please contact Google to get access. # Configuration for connecting to a live agent. Currently, this feature is not general available, please contact Google to get access. - "livePersonConfig": { # Configuration specific to LivePerson (https://www.liveperson.com). # Uses LivePerson (https://www.liveperson.com). + "livePersonConfig": { # Configuration specific to [LivePerson](https://www.liveperson.com). # Uses [LivePerson](https://www.liveperson.com). "accountNumber": "A String", # Required. Account number of the LivePerson account to connect. This is the account number you input at the login page. }, "salesforceLiveAgentConfig": { # Configuration specific to Salesforce Live Agent. # Uses Salesforce Live Agent. diff --git a/docs/dyn/dialogflow_v2.projects.suggestions.html b/docs/dyn/dialogflow_v2.projects.suggestions.html index 6a8fcbbd97e..786ad5780c6 100644 --- a/docs/dyn/dialogflow_v2.projects.suggestions.html +++ b/docs/dyn/dialogflow_v2.projects.suggestions.html @@ -159,6 +159,9 @@

Method Details

}, }, ], + "generators": [ # Optional. List of various generator resource names used in the conversation profile. + "A String", + ], "groupSuggestionResponses": True or False, # If `group_suggestion_responses` is false, and there are multiple `feature_configs` in `event based suggestion` or StreamingAnalyzeContent, we will try to deliver suggestions to customers as soon as we get new suggestion. Different type of suggestions based on the same context will be in separate Pub/Sub event or `StreamingAnalyzeContentResponse`. If `group_suggestion_responses` set to true. All the suggestions to the same participant based on the same context will be grouped into a single Pub/Sub event or StreamingAnalyzeContentResponse. }, "humanAgentSuggestionConfig": { # Detail human agent assistant config. # Configuration for agent assistance of human agent participant. @@ -213,6 +216,9 @@

Method Details

}, }, ], + "generators": [ # Optional. List of various generator resource names used in the conversation profile. + "A String", + ], "groupSuggestionResponses": True or False, # If `group_suggestion_responses` is false, and there are multiple `feature_configs` in `event based suggestion` or StreamingAnalyzeContent, we will try to deliver suggestions to customers as soon as we get new suggestion. Different type of suggestions based on the same context will be in separate Pub/Sub event or `StreamingAnalyzeContentResponse`. If `group_suggestion_responses` set to true. All the suggestions to the same participant based on the same context will be grouped into a single Pub/Sub event or StreamingAnalyzeContentResponse. }, "messageAnalysisConfig": { # Configuration for analyses to run on each conversation message. # Configuration for message analysis. @@ -225,7 +231,7 @@

Method Details

}, }, "humanAgentHandoffConfig": { # Defines the hand off to a live agent, typically on which external agent service provider to connect to a conversation. Currently, this feature is not general available, please contact Google to get access. # Configuration for connecting to a live agent. Currently, this feature is not general available, please contact Google to get access. - "livePersonConfig": { # Configuration specific to LivePerson (https://www.liveperson.com). # Uses LivePerson (https://www.liveperson.com). + "livePersonConfig": { # Configuration specific to [LivePerson](https://www.liveperson.com). # Uses [LivePerson](https://www.liveperson.com). "accountNumber": "A String", # Required. Account number of the LivePerson account to connect. This is the account number you input at the login page. }, "salesforceLiveAgentConfig": { # Configuration specific to Salesforce Live Agent. # Uses Salesforce Live Agent. diff --git a/docs/dyn/dialogflow_v2beta1.projects.agent.environments.users.sessions.html b/docs/dyn/dialogflow_v2beta1.projects.agent.environments.users.sessions.html index 00f23541dab..3630c61330e 100644 --- a/docs/dyn/dialogflow_v2beta1.projects.agent.environments.users.sessions.html +++ b/docs/dyn/dialogflow_v2beta1.projects.agent.environments.users.sessions.html @@ -152,6 +152,7 @@

Method Details

"noBargeInDuration": "A String", # Duration that is not eligible for barge-in at the beginning of the input audio. "totalDuration": "A String", # Total duration for the playback at the beginning of the input audio. }, + "defaultNoSpeechTimeout": "A String", # If set, use this no-speech timeout when the agent does not provide a no-speech timeout itself. "disableNoSpeechRecognizedEvent": True or False, # Only used in Participants.AnalyzeContent and Participants.StreamingAnalyzeContent. If `false` and recognition doesn't return any result, trigger `NO_SPEECH_RECOGNIZED` event to Dialogflow agent. "enableAutomaticPunctuation": True or False, # Enable automatic punctuation option at the speech backend. "enableWordInfo": True or False, # If `true`, Dialogflow returns SpeechWordInfo in StreamingRecognitionResult with information about the recognized speech words, e.g. start and end time offsets. If false or unspecified, Speech doesn't return any word-level information. diff --git a/docs/dyn/dialogflow_v2beta1.projects.agent.sessions.html b/docs/dyn/dialogflow_v2beta1.projects.agent.sessions.html index fc78f69088d..b4eec144ec8 100644 --- a/docs/dyn/dialogflow_v2beta1.projects.agent.sessions.html +++ b/docs/dyn/dialogflow_v2beta1.projects.agent.sessions.html @@ -152,6 +152,7 @@

Method Details

"noBargeInDuration": "A String", # Duration that is not eligible for barge-in at the beginning of the input audio. "totalDuration": "A String", # Total duration for the playback at the beginning of the input audio. }, + "defaultNoSpeechTimeout": "A String", # If set, use this no-speech timeout when the agent does not provide a no-speech timeout itself. "disableNoSpeechRecognizedEvent": True or False, # Only used in Participants.AnalyzeContent and Participants.StreamingAnalyzeContent. If `false` and recognition doesn't return any result, trigger `NO_SPEECH_RECOGNIZED` event to Dialogflow agent. "enableAutomaticPunctuation": True or False, # Enable automatic punctuation option at the speech backend. "enableWordInfo": True or False, # If `true`, Dialogflow returns SpeechWordInfo in StreamingRecognitionResult with information about the recognized speech words, e.g. start and end time offsets. If false or unspecified, Speech doesn't return any word-level information. diff --git a/docs/dyn/dialogflow_v2beta1.projects.conversationProfiles.html b/docs/dyn/dialogflow_v2beta1.projects.conversationProfiles.html index cdd33a755da..a0b7c0002c3 100644 --- a/docs/dyn/dialogflow_v2beta1.projects.conversationProfiles.html +++ b/docs/dyn/dialogflow_v2beta1.projects.conversationProfiles.html @@ -219,6 +219,9 @@

Method Details

}, }, ], + "generators": [ # Optional. List of various generator resource names used in the conversation profile. + "A String", + ], "groupSuggestionResponses": True or False, # If `group_suggestion_responses` is false, and there are multiple `feature_configs` in `event based suggestion` or StreamingAnalyzeContent, we will try to deliver suggestions to customers as soon as we get new suggestion. Different type of suggestions based on the same context will be in separate Pub/Sub event or `StreamingAnalyzeContentResponse`. If `group_suggestion_responses` set to true. All the suggestions to the same participant based on the same context will be grouped into a single Pub/Sub event or StreamingAnalyzeContentResponse. }, "humanAgentSuggestionConfig": { # Detail human agent assistant config. # Configuration for agent assistance of human agent participant. @@ -273,6 +276,9 @@

Method Details

}, }, ], + "generators": [ # Optional. List of various generator resource names used in the conversation profile. + "A String", + ], "groupSuggestionResponses": True or False, # If `group_suggestion_responses` is false, and there are multiple `feature_configs` in `event based suggestion` or StreamingAnalyzeContent, we will try to deliver suggestions to customers as soon as we get new suggestion. Different type of suggestions based on the same context will be in separate Pub/Sub event or `StreamingAnalyzeContentResponse`. If `group_suggestion_responses` set to true. All the suggestions to the same participant based on the same context will be grouped into a single Pub/Sub event or StreamingAnalyzeContentResponse. }, "messageAnalysisConfig": { # Configuration for analyses to run on each conversation message. # Configuration for message analysis. @@ -285,7 +291,7 @@

Method Details

}, }, "humanAgentHandoffConfig": { # Defines the hand off to a live agent, typically on which external agent service provider to connect to a conversation. Currently, this feature is not general available, please contact Google to get access. # Configuration for connecting to a live agent. Currently, this feature is not general available, please contact Google to get access. - "livePersonConfig": { # Configuration specific to LivePerson (https://www.liveperson.com). # Uses LivePerson (https://www.liveperson.com). + "livePersonConfig": { # Configuration specific to [LivePerson](https://www.liveperson.com). # Uses [LivePerson](https://www.liveperson.com). "accountNumber": "A String", # Required. Account number of the LivePerson account to connect. This is the account number you input at the login page. }, "salesforceLiveAgentConfig": { # Configuration specific to Salesforce Live Agent. # Uses Salesforce Live Agent. @@ -398,6 +404,9 @@

Method Details

}, }, ], + "generators": [ # Optional. List of various generator resource names used in the conversation profile. + "A String", + ], "groupSuggestionResponses": True or False, # If `group_suggestion_responses` is false, and there are multiple `feature_configs` in `event based suggestion` or StreamingAnalyzeContent, we will try to deliver suggestions to customers as soon as we get new suggestion. Different type of suggestions based on the same context will be in separate Pub/Sub event or `StreamingAnalyzeContentResponse`. If `group_suggestion_responses` set to true. All the suggestions to the same participant based on the same context will be grouped into a single Pub/Sub event or StreamingAnalyzeContentResponse. }, "humanAgentSuggestionConfig": { # Detail human agent assistant config. # Configuration for agent assistance of human agent participant. @@ -452,6 +461,9 @@

Method Details

}, }, ], + "generators": [ # Optional. List of various generator resource names used in the conversation profile. + "A String", + ], "groupSuggestionResponses": True or False, # If `group_suggestion_responses` is false, and there are multiple `feature_configs` in `event based suggestion` or StreamingAnalyzeContent, we will try to deliver suggestions to customers as soon as we get new suggestion. Different type of suggestions based on the same context will be in separate Pub/Sub event or `StreamingAnalyzeContentResponse`. If `group_suggestion_responses` set to true. All the suggestions to the same participant based on the same context will be grouped into a single Pub/Sub event or StreamingAnalyzeContentResponse. }, "messageAnalysisConfig": { # Configuration for analyses to run on each conversation message. # Configuration for message analysis. @@ -464,7 +476,7 @@

Method Details

}, }, "humanAgentHandoffConfig": { # Defines the hand off to a live agent, typically on which external agent service provider to connect to a conversation. Currently, this feature is not general available, please contact Google to get access. # Configuration for connecting to a live agent. Currently, this feature is not general available, please contact Google to get access. - "livePersonConfig": { # Configuration specific to LivePerson (https://www.liveperson.com). # Uses LivePerson (https://www.liveperson.com). + "livePersonConfig": { # Configuration specific to [LivePerson](https://www.liveperson.com). # Uses [LivePerson](https://www.liveperson.com). "accountNumber": "A String", # Required. Account number of the LivePerson account to connect. This is the account number you input at the login page. }, "salesforceLiveAgentConfig": { # Configuration specific to Salesforce Live Agent. # Uses Salesforce Live Agent. @@ -602,6 +614,9 @@

Method Details

}, }, ], + "generators": [ # Optional. List of various generator resource names used in the conversation profile. + "A String", + ], "groupSuggestionResponses": True or False, # If `group_suggestion_responses` is false, and there are multiple `feature_configs` in `event based suggestion` or StreamingAnalyzeContent, we will try to deliver suggestions to customers as soon as we get new suggestion. Different type of suggestions based on the same context will be in separate Pub/Sub event or `StreamingAnalyzeContentResponse`. If `group_suggestion_responses` set to true. All the suggestions to the same participant based on the same context will be grouped into a single Pub/Sub event or StreamingAnalyzeContentResponse. }, "humanAgentSuggestionConfig": { # Detail human agent assistant config. # Configuration for agent assistance of human agent participant. @@ -656,6 +671,9 @@

Method Details

}, }, ], + "generators": [ # Optional. List of various generator resource names used in the conversation profile. + "A String", + ], "groupSuggestionResponses": True or False, # If `group_suggestion_responses` is false, and there are multiple `feature_configs` in `event based suggestion` or StreamingAnalyzeContent, we will try to deliver suggestions to customers as soon as we get new suggestion. Different type of suggestions based on the same context will be in separate Pub/Sub event or `StreamingAnalyzeContentResponse`. If `group_suggestion_responses` set to true. All the suggestions to the same participant based on the same context will be grouped into a single Pub/Sub event or StreamingAnalyzeContentResponse. }, "messageAnalysisConfig": { # Configuration for analyses to run on each conversation message. # Configuration for message analysis. @@ -668,7 +686,7 @@

Method Details

}, }, "humanAgentHandoffConfig": { # Defines the hand off to a live agent, typically on which external agent service provider to connect to a conversation. Currently, this feature is not general available, please contact Google to get access. # Configuration for connecting to a live agent. Currently, this feature is not general available, please contact Google to get access. - "livePersonConfig": { # Configuration specific to LivePerson (https://www.liveperson.com). # Uses LivePerson (https://www.liveperson.com). + "livePersonConfig": { # Configuration specific to [LivePerson](https://www.liveperson.com). # Uses [LivePerson](https://www.liveperson.com). "accountNumber": "A String", # Required. Account number of the LivePerson account to connect. This is the account number you input at the login page. }, "salesforceLiveAgentConfig": { # Configuration specific to Salesforce Live Agent. # Uses Salesforce Live Agent. @@ -792,6 +810,9 @@

Method Details

}, }, ], + "generators": [ # Optional. List of various generator resource names used in the conversation profile. + "A String", + ], "groupSuggestionResponses": True or False, # If `group_suggestion_responses` is false, and there are multiple `feature_configs` in `event based suggestion` or StreamingAnalyzeContent, we will try to deliver suggestions to customers as soon as we get new suggestion. Different type of suggestions based on the same context will be in separate Pub/Sub event or `StreamingAnalyzeContentResponse`. If `group_suggestion_responses` set to true. All the suggestions to the same participant based on the same context will be grouped into a single Pub/Sub event or StreamingAnalyzeContentResponse. }, "humanAgentSuggestionConfig": { # Detail human agent assistant config. # Configuration for agent assistance of human agent participant. @@ -846,6 +867,9 @@

Method Details

}, }, ], + "generators": [ # Optional. List of various generator resource names used in the conversation profile. + "A String", + ], "groupSuggestionResponses": True or False, # If `group_suggestion_responses` is false, and there are multiple `feature_configs` in `event based suggestion` or StreamingAnalyzeContent, we will try to deliver suggestions to customers as soon as we get new suggestion. Different type of suggestions based on the same context will be in separate Pub/Sub event or `StreamingAnalyzeContentResponse`. If `group_suggestion_responses` set to true. All the suggestions to the same participant based on the same context will be grouped into a single Pub/Sub event or StreamingAnalyzeContentResponse. }, "messageAnalysisConfig": { # Configuration for analyses to run on each conversation message. # Configuration for message analysis. @@ -858,7 +882,7 @@

Method Details

}, }, "humanAgentHandoffConfig": { # Defines the hand off to a live agent, typically on which external agent service provider to connect to a conversation. Currently, this feature is not general available, please contact Google to get access. # Configuration for connecting to a live agent. Currently, this feature is not general available, please contact Google to get access. - "livePersonConfig": { # Configuration specific to LivePerson (https://www.liveperson.com). # Uses LivePerson (https://www.liveperson.com). + "livePersonConfig": { # Configuration specific to [LivePerson](https://www.liveperson.com). # Uses [LivePerson](https://www.liveperson.com). "accountNumber": "A String", # Required. Account number of the LivePerson account to connect. This is the account number you input at the login page. }, "salesforceLiveAgentConfig": { # Configuration specific to Salesforce Live Agent. # Uses Salesforce Live Agent. @@ -990,6 +1014,9 @@

Method Details

}, }, ], + "generators": [ # Optional. List of various generator resource names used in the conversation profile. + "A String", + ], "groupSuggestionResponses": True or False, # If `group_suggestion_responses` is false, and there are multiple `feature_configs` in `event based suggestion` or StreamingAnalyzeContent, we will try to deliver suggestions to customers as soon as we get new suggestion. Different type of suggestions based on the same context will be in separate Pub/Sub event or `StreamingAnalyzeContentResponse`. If `group_suggestion_responses` set to true. All the suggestions to the same participant based on the same context will be grouped into a single Pub/Sub event or StreamingAnalyzeContentResponse. }, "humanAgentSuggestionConfig": { # Detail human agent assistant config. # Configuration for agent assistance of human agent participant. @@ -1044,6 +1071,9 @@

Method Details

}, }, ], + "generators": [ # Optional. List of various generator resource names used in the conversation profile. + "A String", + ], "groupSuggestionResponses": True or False, # If `group_suggestion_responses` is false, and there are multiple `feature_configs` in `event based suggestion` or StreamingAnalyzeContent, we will try to deliver suggestions to customers as soon as we get new suggestion. Different type of suggestions based on the same context will be in separate Pub/Sub event or `StreamingAnalyzeContentResponse`. If `group_suggestion_responses` set to true. All the suggestions to the same participant based on the same context will be grouped into a single Pub/Sub event or StreamingAnalyzeContentResponse. }, "messageAnalysisConfig": { # Configuration for analyses to run on each conversation message. # Configuration for message analysis. @@ -1056,7 +1086,7 @@

Method Details

}, }, "humanAgentHandoffConfig": { # Defines the hand off to a live agent, typically on which external agent service provider to connect to a conversation. Currently, this feature is not general available, please contact Google to get access. # Configuration for connecting to a live agent. Currently, this feature is not general available, please contact Google to get access. - "livePersonConfig": { # Configuration specific to LivePerson (https://www.liveperson.com). # Uses LivePerson (https://www.liveperson.com). + "livePersonConfig": { # Configuration specific to [LivePerson](https://www.liveperson.com). # Uses [LivePerson](https://www.liveperson.com). "accountNumber": "A String", # Required. Account number of the LivePerson account to connect. This is the account number you input at the login page. }, "salesforceLiveAgentConfig": { # Configuration specific to Salesforce Live Agent. # Uses Salesforce Live Agent. @@ -1170,6 +1200,9 @@

Method Details

}, }, ], + "generators": [ # Optional. List of various generator resource names used in the conversation profile. + "A String", + ], "groupSuggestionResponses": True or False, # If `group_suggestion_responses` is false, and there are multiple `feature_configs` in `event based suggestion` or StreamingAnalyzeContent, we will try to deliver suggestions to customers as soon as we get new suggestion. Different type of suggestions based on the same context will be in separate Pub/Sub event or `StreamingAnalyzeContentResponse`. If `group_suggestion_responses` set to true. All the suggestions to the same participant based on the same context will be grouped into a single Pub/Sub event or StreamingAnalyzeContentResponse. }, "humanAgentSuggestionConfig": { # Detail human agent assistant config. # Configuration for agent assistance of human agent participant. @@ -1224,6 +1257,9 @@

Method Details

}, }, ], + "generators": [ # Optional. List of various generator resource names used in the conversation profile. + "A String", + ], "groupSuggestionResponses": True or False, # If `group_suggestion_responses` is false, and there are multiple `feature_configs` in `event based suggestion` or StreamingAnalyzeContent, we will try to deliver suggestions to customers as soon as we get new suggestion. Different type of suggestions based on the same context will be in separate Pub/Sub event or `StreamingAnalyzeContentResponse`. If `group_suggestion_responses` set to true. All the suggestions to the same participant based on the same context will be grouped into a single Pub/Sub event or StreamingAnalyzeContentResponse. }, "messageAnalysisConfig": { # Configuration for analyses to run on each conversation message. # Configuration for message analysis. @@ -1236,7 +1272,7 @@

Method Details

}, }, "humanAgentHandoffConfig": { # Defines the hand off to a live agent, typically on which external agent service provider to connect to a conversation. Currently, this feature is not general available, please contact Google to get access. # Configuration for connecting to a live agent. Currently, this feature is not general available, please contact Google to get access. - "livePersonConfig": { # Configuration specific to LivePerson (https://www.liveperson.com). # Uses LivePerson (https://www.liveperson.com). + "livePersonConfig": { # Configuration specific to [LivePerson](https://www.liveperson.com). # Uses [LivePerson](https://www.liveperson.com). "accountNumber": "A String", # Required. Account number of the LivePerson account to connect. This is the account number you input at the login page. }, "salesforceLiveAgentConfig": { # Configuration specific to Salesforce Live Agent. # Uses Salesforce Live Agent. diff --git a/docs/dyn/dialogflow_v2beta1.projects.conversations.participants.html b/docs/dyn/dialogflow_v2beta1.projects.conversations.participants.html index 09b24e78ae6..b5f0c0d787a 100644 --- a/docs/dyn/dialogflow_v2beta1.projects.conversations.participants.html +++ b/docs/dyn/dialogflow_v2beta1.projects.conversations.participants.html @@ -124,6 +124,7 @@

Method Details

"noBargeInDuration": "A String", # Duration that is not eligible for barge-in at the beginning of the input audio. "totalDuration": "A String", # Total duration for the playback at the beginning of the input audio. }, + "defaultNoSpeechTimeout": "A String", # If set, use this no-speech timeout when the agent does not provide a no-speech timeout itself. "disableNoSpeechRecognizedEvent": True or False, # Only used in Participants.AnalyzeContent and Participants.StreamingAnalyzeContent. If `false` and recognition doesn't return any result, trigger `NO_SPEECH_RECOGNIZED` event to Dialogflow agent. "enableAutomaticPunctuation": True or False, # Enable automatic punctuation option at the speech backend. "enableWordInfo": True or False, # If `true`, Dialogflow returns SpeechWordInfo in StreamingRecognitionResult with information about the recognized speech words, e.g. start and end time offsets. If false or unspecified, Speech doesn't return any word-level information. diff --git a/docs/dyn/dialogflow_v2beta1.projects.generators.html b/docs/dyn/dialogflow_v2beta1.projects.generators.html new file mode 100644 index 00000000000..a8e02a245a3 --- /dev/null +++ b/docs/dyn/dialogflow_v2beta1.projects.generators.html @@ -0,0 +1,333 @@ + + + +

Dialogflow API . projects . generators

+

Instance Methods

+

+ close()

+

Close httplib2 connections.

+

+ create(parent, body=None, generatorId=None, x__xgafv=None)

+

Creates a generator.

+

+ list(parent, pageSize=None, pageToken=None, x__xgafv=None)

+

Lists generators.

+

+ list_next()

+

Retrieves the next page of results.

+

Method Details

+
+ close() +
Close httplib2 connections.
+
+ +
+ create(parent, body=None, generatorId=None, x__xgafv=None) +
Creates a generator.
+
+Args:
+  parent: string, Required. The project/location to create a generator for. Format: `projects//locations/` (required)
+  body: object, The request body.
+    The object takes the form of:
+
+{ # LLM generator.
+  "createTime": "A String", # Output only. Creation time of this generator.
+  "description": "A String", # Optional. Human readable description of the generator.
+  "inferenceParameter": { # The parameters of inference. # Optional. Inference parameters for this generator.
+    "maxOutputTokens": 42, # Optional. Maximum number of the output tokens for the generator.
+    "temperature": 3.14, # Optional. Controls the randomness of LLM predictions. Low temperature = less random. High temperature = more random. If unset (or 0), uses a default value of 0.
+    "topK": 42, # Optional. Top-k changes how the model selects tokens for output. A top-k of 1 means the selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-k of 3 means that the next token is selected from among the 3 most probable tokens (using temperature). For each token selection step, the top K tokens with the highest probabilities are sampled. Then tokens are further filtered based on topP with the final token selected using temperature sampling. Specify a lower value for less random responses and a higher value for more random responses. Acceptable value is [1, 40], default to 40.
+    "topP": 3.14, # Optional. Top-p changes how the model selects tokens for output. Tokens are selected from most K (see topK parameter) probable to least until the sum of their probabilities equals the top-p value. For example, if tokens A, B, and C have a probability of 0.3, 0.2, and 0.1 and the top-p value is 0.5, then the model will select either A or B as the next token (using temperature) and doesn't consider C. The default top-p value is 0.95. Specify a lower value for less random responses and a higher value for more random responses. Acceptable value is [0.0, 1.0], default to 0.95.
+  },
+  "name": "A String", # Output only. Identifier. The resource name of the generator. Format: `projects//locations//generators/`
+  "summarizationContext": { # Summarization context that customer can configure. # Input of prebuilt Summarization feature.
+    "fewShotExamples": [ # Optional. List of few shot examples.
+      { # Providing examples in the generator (i.e. building a few-shot generator) helps convey the desired format of the LLM response. NEXT_ID: 10
+        "conversationContext": { # Context of the conversation, including transcripts. # Optional. Conversation transcripts.
+          "messageEntries": [ # Optional. List of message transcripts in the conversation.
+            { # Represents a message entry of a conversation.
+              "createTime": "A String", # Optional. Create time of the message entry.
+              "languageCode": "A String", # Optional. The language of the text. See [Language Support](https://cloud.google.com/dialogflow/docs/reference/language) for a list of the currently supported language codes.
+              "role": "A String", # Optional. Participant role of the message.
+              "text": "A String", # Optional. Transcript content of the message.
+            },
+          ],
+        },
+        "extraInfo": { # Optional. Key is the placeholder field name in input, value is the value of the placeholder. E.g. instruction contains "@price", and ingested data has <"price", "10">
+          "a_key": "A String",
+        },
+        "output": { # Suggestion generated using a Generator. # Required. Example output of the model.
+          "summarySuggestion": { # Suggested summary of the conversation. # Optional. Suggested summary.
+            "summarySections": [ # Required. All the parts of generated summary.
+              { # A component of the generated summary.
+                "section": "A String", # Required. Name of the section.
+                "summary": "A String", # Required. Summary text for the section.
+              },
+            ],
+          },
+        },
+        "summarizationSectionList": { # List of summarization sections. # Summarization sections.
+          "summarizationSections": [ # Optional. Summarization sections.
+            { # Represents the section of summarization.
+              "definition": "A String", # Optional. Definition of the section, for example, "what the customer needs help with or has question about."
+              "key": "A String", # Optional. Name of the section, for example, "situation".
+              "type": "A String", # Optional. Type of the summarization section.
+            },
+          ],
+        },
+      },
+    ],
+    "outputLanguageCode": "A String", # Optional. The target language of the generated summary. The language code for conversation will be used if this field is empty. Supported 2.0 and later versions.
+    "summarizationSections": [ # Optional. List of sections. Note it contains both predefined section sand customer defined sections.
+      { # Represents the section of summarization.
+        "definition": "A String", # Optional. Definition of the section, for example, "what the customer needs help with or has question about."
+        "key": "A String", # Optional. Name of the section, for example, "situation".
+        "type": "A String", # Optional. Type of the summarization section.
+      },
+    ],
+    "version": "A String", # Optional. Version of the feature. If not set, default to latest version. Current candidates are ["1.0"].
+  },
+  "triggerEvent": "A String", # Optional. The trigger event of the generator. It defines when the generator is triggered in a conversation.
+  "updateTime": "A String", # Output only. Update time of this generator.
+}
+
+  generatorId: string, Optional. The ID to use for the generator, which will become the final component of the generator's resource name. The generator ID must match the regular expression `a-zA-Z*`, with a length in the range [3,64]. If the field is not provided, an ID will be auto-generated. If the field is provided, the caller is responsible for (1) the uniqueness of the ID, otherwise the request will be rejected, and (2) consistently deciding whether to use custom IDs within a project, to better ensure uniqueness.
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # LLM generator.
+  "createTime": "A String", # Output only. Creation time of this generator.
+  "description": "A String", # Optional. Human readable description of the generator.
+  "inferenceParameter": { # The parameters of inference. # Optional. Inference parameters for this generator.
+    "maxOutputTokens": 42, # Optional. Maximum number of the output tokens for the generator.
+    "temperature": 3.14, # Optional. Controls the randomness of LLM predictions. Low temperature = less random. High temperature = more random. If unset (or 0), uses a default value of 0.
+    "topK": 42, # Optional. Top-k changes how the model selects tokens for output. A top-k of 1 means the selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-k of 3 means that the next token is selected from among the 3 most probable tokens (using temperature). For each token selection step, the top K tokens with the highest probabilities are sampled. Then tokens are further filtered based on topP with the final token selected using temperature sampling. Specify a lower value for less random responses and a higher value for more random responses. Acceptable value is [1, 40], default to 40.
+    "topP": 3.14, # Optional. Top-p changes how the model selects tokens for output. Tokens are selected from most K (see topK parameter) probable to least until the sum of their probabilities equals the top-p value. For example, if tokens A, B, and C have a probability of 0.3, 0.2, and 0.1 and the top-p value is 0.5, then the model will select either A or B as the next token (using temperature) and doesn't consider C. The default top-p value is 0.95. Specify a lower value for less random responses and a higher value for more random responses. Acceptable value is [0.0, 1.0], default to 0.95.
+  },
+  "name": "A String", # Output only. Identifier. The resource name of the generator. Format: `projects//locations//generators/`
+  "summarizationContext": { # Summarization context that customer can configure. # Input of prebuilt Summarization feature.
+    "fewShotExamples": [ # Optional. List of few shot examples.
+      { # Providing examples in the generator (i.e. building a few-shot generator) helps convey the desired format of the LLM response. NEXT_ID: 10
+        "conversationContext": { # Context of the conversation, including transcripts. # Optional. Conversation transcripts.
+          "messageEntries": [ # Optional. List of message transcripts in the conversation.
+            { # Represents a message entry of a conversation.
+              "createTime": "A String", # Optional. Create time of the message entry.
+              "languageCode": "A String", # Optional. The language of the text. See [Language Support](https://cloud.google.com/dialogflow/docs/reference/language) for a list of the currently supported language codes.
+              "role": "A String", # Optional. Participant role of the message.
+              "text": "A String", # Optional. Transcript content of the message.
+            },
+          ],
+        },
+        "extraInfo": { # Optional. Key is the placeholder field name in input, value is the value of the placeholder. E.g. instruction contains "@price", and ingested data has <"price", "10">
+          "a_key": "A String",
+        },
+        "output": { # Suggestion generated using a Generator. # Required. Example output of the model.
+          "summarySuggestion": { # Suggested summary of the conversation. # Optional. Suggested summary.
+            "summarySections": [ # Required. All the parts of generated summary.
+              { # A component of the generated summary.
+                "section": "A String", # Required. Name of the section.
+                "summary": "A String", # Required. Summary text for the section.
+              },
+            ],
+          },
+        },
+        "summarizationSectionList": { # List of summarization sections. # Summarization sections.
+          "summarizationSections": [ # Optional. Summarization sections.
+            { # Represents the section of summarization.
+              "definition": "A String", # Optional. Definition of the section, for example, "what the customer needs help with or has question about."
+              "key": "A String", # Optional. Name of the section, for example, "situation".
+              "type": "A String", # Optional. Type of the summarization section.
+            },
+          ],
+        },
+      },
+    ],
+    "outputLanguageCode": "A String", # Optional. The target language of the generated summary. The language code for conversation will be used if this field is empty. Supported 2.0 and later versions.
+    "summarizationSections": [ # Optional. List of sections. Note it contains both predefined section sand customer defined sections.
+      { # Represents the section of summarization.
+        "definition": "A String", # Optional. Definition of the section, for example, "what the customer needs help with or has question about."
+        "key": "A String", # Optional. Name of the section, for example, "situation".
+        "type": "A String", # Optional. Type of the summarization section.
+      },
+    ],
+    "version": "A String", # Optional. Version of the feature. If not set, default to latest version. Current candidates are ["1.0"].
+  },
+  "triggerEvent": "A String", # Optional. The trigger event of the generator. It defines when the generator is triggered in a conversation.
+  "updateTime": "A String", # Output only. Update time of this generator.
+}
+
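As a usage illustration, a minimal create() call with this client might look like the sketch below. The project, generator ID, and the triggerEvent and section type enum values are illustrative assumptions, not values confirmed by these docs:

from googleapiclient.discovery import build

client = build("dialogflow", "v2beta1")

parent = "projects/my-project/locations/global"  # hypothetical
body = {
    "description": "Summarizes support conversations for agents.",
    "triggerEvent": "MANUAL_CALL",  # assumed enum value; check the TriggerEvent enum
    "inferenceParameter": {
        "temperature": 0.2,      # low temperature for more deterministic summaries
        "maxOutputTokens": 512,
    },
    "summarizationContext": {
        "version": "1.0",
        "summarizationSections": [
            {
                "key": "situation",
                "definition": "What the customer needs help with or has questions about.",
                "type": "SITUATION",  # assumed enum value
            },
        ],
    },
}

generator = (
    client.projects()
    .generators()
    .create(parent=parent, body=body, generatorId="support-summarizer")
    .execute()
)
print(generator["name"])  # server-assigned resource name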
+ +
+ list(parent, pageSize=None, pageToken=None, x__xgafv=None) +
Lists generators.
+
+Args:
+  parent: string, Required. The project/location to list generators for. Format: `projects//locations/` (required)
+  pageSize: integer, Optional. Maximum number of generators to return in a single page. Defaults to 10.
+  pageToken: string, Optional. The next_page_token value returned from a previous list request.
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Response of ListGenerators.
+  "generators": [ # List of generators retrieved.
+    { # LLM generator.
+      "createTime": "A String", # Output only. Creation time of this generator.
+      "description": "A String", # Optional. Human readable description of the generator.
+      "inferenceParameter": { # The parameters of inference. # Optional. Inference parameters for this generator.
+        "maxOutputTokens": 42, # Optional. Maximum number of the output tokens for the generator.
+        "temperature": 3.14, # Optional. Controls the randomness of LLM predictions. Low temperature = less random. High temperature = more random. If unset (or 0), uses a default value of 0.
+        "topK": 42, # Optional. Top-k changes how the model selects tokens for output. A top-k of 1 means the selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-k of 3 means that the next token is selected from among the 3 most probable tokens (using temperature). For each token selection step, the top K tokens with the highest probabilities are sampled. Then tokens are further filtered based on topP with the final token selected using temperature sampling. Specify a lower value for less random responses and a higher value for more random responses. Acceptable value is [1, 40], default to 40.
+        "topP": 3.14, # Optional. Top-p changes how the model selects tokens for output. Tokens are selected from most K (see topK parameter) probable to least until the sum of their probabilities equals the top-p value. For example, if tokens A, B, and C have a probability of 0.3, 0.2, and 0.1 and the top-p value is 0.5, then the model will select either A or B as the next token (using temperature) and doesn't consider C. The default top-p value is 0.95. Specify a lower value for less random responses and a higher value for more random responses. Acceptable value is [0.0, 1.0], default to 0.95.
+      },
+      "name": "A String", # Output only. Identifier. The resource name of the generator. Format: `projects//locations//generators/`
+      "summarizationContext": { # Summarization context that customer can configure. # Input of prebuilt Summarization feature.
+        "fewShotExamples": [ # Optional. List of few shot examples.
+          { # Providing examples in the generator (i.e. building a few-shot generator) helps convey the desired format of the LLM response. NEXT_ID: 10
+            "conversationContext": { # Context of the conversation, including transcripts. # Optional. Conversation transcripts.
+              "messageEntries": [ # Optional. List of message transcripts in the conversation.
+                { # Represents a message entry of a conversation.
+                  "createTime": "A String", # Optional. Create time of the message entry.
+                  "languageCode": "A String", # Optional. The language of the text. See [Language Support](https://cloud.google.com/dialogflow/docs/reference/language) for a list of the currently supported language codes.
+                  "role": "A String", # Optional. Participant role of the message.
+                  "text": "A String", # Optional. Transcript content of the message.
+                },
+              ],
+            },
+            "extraInfo": { # Optional. Key is the placeholder field name in input, value is the value of the placeholder. E.g. instruction contains "@price", and ingested data has <"price", "10">
+              "a_key": "A String",
+            },
+            "output": { # Suggestion generated using a Generator. # Required. Example output of the model.
+              "summarySuggestion": { # Suggested summary of the conversation. # Optional. Suggested summary.
+                "summarySections": [ # Required. All the parts of generated summary.
+                  { # A component of the generated summary.
+                    "section": "A String", # Required. Name of the section.
+                    "summary": "A String", # Required. Summary text for the section.
+                  },
+                ],
+              },
+            },
+            "summarizationSectionList": { # List of summarization sections. # Summarization sections.
+              "summarizationSections": [ # Optional. Summarization sections.
+                { # Represents the section of summarization.
+                  "definition": "A String", # Optional. Definition of the section, for example, "what the customer needs help with or has question about."
+                  "key": "A String", # Optional. Name of the section, for example, "situation".
+                  "type": "A String", # Optional. Type of the summarization section.
+                },
+              ],
+            },
+          },
+        ],
+        "outputLanguageCode": "A String", # Optional. The target language of the generated summary. The language code for conversation will be used if this field is empty. Supported 2.0 and later versions.
+        "summarizationSections": [ # Optional. List of sections. Note it contains both predefined section sand customer defined sections.
+          { # Represents the section of summarization.
+            "definition": "A String", # Optional. Definition of the section, for example, "what the customer needs help with or has question about."
+            "key": "A String", # Optional. Name of the section, for example, "situation".
+            "type": "A String", # Optional. Type of the summarization section.
+          },
+        ],
+        "version": "A String", # Optional. Version of the feature. If not set, default to latest version. Current candidates are ["1.0"].
+      },
+      "triggerEvent": "A String", # Optional. The trigger event of the generator. It defines when the generator is triggered in a conversation.
+      "updateTime": "A String", # Output only. Update time of this generator.
+    },
+  ],
+  "nextPageToken": "A String", # Token to retrieve the next page of results, or empty if there are no more results in the list.
+}
+
+ +
+ list_next() +
Retrieves the next page of results.
+
+        Args:
+          previous_request: The request for the previous page. (required)
+          previous_response: The response from the request for the previous page. (required)
+
+        Returns:
+          A request object that you can call 'execute()' on to request the next
+          page. Returns None if there are no more items in the collection.
+        
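Taken together, list() and list_next() support the standard google-api-python-client pagination idiom. A minimal sketch follows; the project and location are placeholders:

from googleapiclient.discovery import build

client = build("dialogflow", "v2beta1")

parent = "projects/my-project/locations/global"  # hypothetical
request = client.projects().generators().list(parent=parent, pageSize=10)

# Iterate through every page; list_next() returns None when no pages remain.
while request is not None:
    response = request.execute()
    for generator in response.get("generators", []):
        print(generator["name"])
    request = client.projects().generators().list_next(
        previous_request=request, previous_response=response
    )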
+
+ + \ No newline at end of file diff --git a/docs/dyn/dialogflow_v2beta1.projects.html b/docs/dyn/dialogflow_v2beta1.projects.html index 13f3be6396e..5508dae56c2 100644 --- a/docs/dyn/dialogflow_v2beta1.projects.html +++ b/docs/dyn/dialogflow_v2beta1.projects.html @@ -94,6 +94,11 @@

Instance Methods

Returns the conversations Resource.

+

+ generators() +

+

Returns the generators Resource.

+

knowledgeBases()

diff --git a/docs/dyn/dialogflow_v2beta1.projects.locations.agent.environments.users.sessions.html b/docs/dyn/dialogflow_v2beta1.projects.locations.agent.environments.users.sessions.html index d8c40ef4e85..96597bba693 100644 --- a/docs/dyn/dialogflow_v2beta1.projects.locations.agent.environments.users.sessions.html +++ b/docs/dyn/dialogflow_v2beta1.projects.locations.agent.environments.users.sessions.html @@ -152,6 +152,7 @@

Method Details

"noBargeInDuration": "A String", # Duration that is not eligible for barge-in at the beginning of the input audio. "totalDuration": "A String", # Total duration for the playback at the beginning of the input audio. }, + "defaultNoSpeechTimeout": "A String", # If set, use this no-speech timeout when the agent does not provide a no-speech timeout itself. "disableNoSpeechRecognizedEvent": True or False, # Only used in Participants.AnalyzeContent and Participants.StreamingAnalyzeContent. If `false` and recognition doesn't return any result, trigger `NO_SPEECH_RECOGNIZED` event to Dialogflow agent. "enableAutomaticPunctuation": True or False, # Enable automatic punctuation option at the speech backend. "enableWordInfo": True or False, # If `true`, Dialogflow returns SpeechWordInfo in StreamingRecognitionResult with information about the recognized speech words, e.g. start and end time offsets. If false or unspecified, Speech doesn't return any word-level information. diff --git a/docs/dyn/dialogflow_v2beta1.projects.locations.agent.sessions.html b/docs/dyn/dialogflow_v2beta1.projects.locations.agent.sessions.html index 2063ce7e05b..f07baee3476 100644 --- a/docs/dyn/dialogflow_v2beta1.projects.locations.agent.sessions.html +++ b/docs/dyn/dialogflow_v2beta1.projects.locations.agent.sessions.html @@ -152,6 +152,7 @@

Method Details

"noBargeInDuration": "A String", # Duration that is not eligible for barge-in at the beginning of the input audio. "totalDuration": "A String", # Total duration for the playback at the beginning of the input audio. }, + "defaultNoSpeechTimeout": "A String", # If set, use this no-speech timeout when the agent does not provide a no-speech timeout itself. "disableNoSpeechRecognizedEvent": True or False, # Only used in Participants.AnalyzeContent and Participants.StreamingAnalyzeContent. If `false` and recognition doesn't return any result, trigger `NO_SPEECH_RECOGNIZED` event to Dialogflow agent. "enableAutomaticPunctuation": True or False, # Enable automatic punctuation option at the speech backend. "enableWordInfo": True or False, # If `true`, Dialogflow returns SpeechWordInfo in StreamingRecognitionResult with information about the recognized speech words, e.g. start and end time offsets. If false or unspecified, Speech doesn't return any word-level information. diff --git a/docs/dyn/dialogflow_v2beta1.projects.locations.conversationProfiles.html b/docs/dyn/dialogflow_v2beta1.projects.locations.conversationProfiles.html index a6d871e7ed5..d8284cfe6b5 100644 --- a/docs/dyn/dialogflow_v2beta1.projects.locations.conversationProfiles.html +++ b/docs/dyn/dialogflow_v2beta1.projects.locations.conversationProfiles.html @@ -219,6 +219,9 @@

Method Details

}, }, ], + "generators": [ # Optional. List of various generator resource names used in the conversation profile. + "A String", + ], "groupSuggestionResponses": True or False, # If `group_suggestion_responses` is false, and there are multiple `feature_configs` in `event based suggestion` or StreamingAnalyzeContent, we will try to deliver suggestions to customers as soon as we get new suggestion. Different type of suggestions based on the same context will be in separate Pub/Sub event or `StreamingAnalyzeContentResponse`. If `group_suggestion_responses` set to true. All the suggestions to the same participant based on the same context will be grouped into a single Pub/Sub event or StreamingAnalyzeContentResponse. }, "humanAgentSuggestionConfig": { # Detail human agent assistant config. # Configuration for agent assistance of human agent participant. @@ -273,6 +276,9 @@

Method Details

}, }, ], + "generators": [ # Optional. List of various generator resource names used in the conversation profile. + "A String", + ], "groupSuggestionResponses": True or False, # If `group_suggestion_responses` is false, and there are multiple `feature_configs` in `event based suggestion` or StreamingAnalyzeContent, we will try to deliver suggestions to customers as soon as we get new suggestion. Different type of suggestions based on the same context will be in separate Pub/Sub event or `StreamingAnalyzeContentResponse`. If `group_suggestion_responses` set to true. All the suggestions to the same participant based on the same context will be grouped into a single Pub/Sub event or StreamingAnalyzeContentResponse. }, "messageAnalysisConfig": { # Configuration for analyses to run on each conversation message. # Configuration for message analysis. @@ -285,7 +291,7 @@

Method Details

}, }, "humanAgentHandoffConfig": { # Defines the hand off to a live agent, typically on which external agent service provider to connect to a conversation. Currently, this feature is not general available, please contact Google to get access. # Configuration for connecting to a live agent. Currently, this feature is not general available, please contact Google to get access. - "livePersonConfig": { # Configuration specific to LivePerson (https://www.liveperson.com). # Uses LivePerson (https://www.liveperson.com). + "livePersonConfig": { # Configuration specific to [LivePerson](https://www.liveperson.com). # Uses [LivePerson](https://www.liveperson.com). "accountNumber": "A String", # Required. Account number of the LivePerson account to connect. This is the account number you input at the login page. }, "salesforceLiveAgentConfig": { # Configuration specific to Salesforce Live Agent. # Uses Salesforce Live Agent. @@ -398,6 +404,9 @@

Method Details

           },
         },
       ],
+      "generators": [ # Optional. List of various generator resource names used in the conversation profile.
+        "A String",
+      ],
       "groupSuggestionResponses": True or False, # If `group_suggestion_responses` is false, and there are multiple `feature_configs` in `event based suggestion` or StreamingAnalyzeContent, we will try to deliver suggestions to customers as soon as we get a new suggestion. Different types of suggestions based on the same context will be in separate Pub/Sub events or `StreamingAnalyzeContentResponse` messages. If `group_suggestion_responses` is set to true, all the suggestions to the same participant based on the same context will be grouped into a single Pub/Sub event or StreamingAnalyzeContentResponse.
     },
     "humanAgentSuggestionConfig": { # Detailed human agent assistant config. # Configuration for agent assistance of human agent participant.
@@ -452,6 +461,9 @@ Method Details
           },
         },
       ],
+      "generators": [ # Optional. List of various generator resource names used in the conversation profile.
+        "A String",
+      ],
       "groupSuggestionResponses": True or False, # If `group_suggestion_responses` is false, and there are multiple `feature_configs` in `event based suggestion` or StreamingAnalyzeContent, we will try to deliver suggestions to customers as soon as we get a new suggestion. Different types of suggestions based on the same context will be in separate Pub/Sub events or `StreamingAnalyzeContentResponse` messages. If `group_suggestion_responses` is set to true, all the suggestions to the same participant based on the same context will be grouped into a single Pub/Sub event or StreamingAnalyzeContentResponse.
     },
     "messageAnalysisConfig": { # Configuration for analyses to run on each conversation message. # Configuration for message analysis.
@@ -464,7 +476,7 @@ Method Details
}, }, "humanAgentHandoffConfig": { # Defines the hand off to a live agent, typically on which external agent service provider to connect to a conversation. Currently, this feature is not general available, please contact Google to get access. # Configuration for connecting to a live agent. Currently, this feature is not general available, please contact Google to get access. - "livePersonConfig": { # Configuration specific to LivePerson (https://www.liveperson.com). # Uses LivePerson (https://www.liveperson.com). + "livePersonConfig": { # Configuration specific to [LivePerson](https://www.liveperson.com). # Uses [LivePerson](https://www.liveperson.com). "accountNumber": "A String", # Required. Account number of the LivePerson account to connect. This is the account number you input at the login page. }, "salesforceLiveAgentConfig": { # Configuration specific to Salesforce Live Agent. # Uses Salesforce Live Agent. @@ -602,6 +614,9 @@

Method Details

           },
         },
       ],
+      "generators": [ # Optional. List of various generator resource names used in the conversation profile.
+        "A String",
+      ],
       "groupSuggestionResponses": True or False, # If `group_suggestion_responses` is false, and there are multiple `feature_configs` in `event based suggestion` or StreamingAnalyzeContent, we will try to deliver suggestions to customers as soon as we get a new suggestion. Different types of suggestions based on the same context will be in separate Pub/Sub events or `StreamingAnalyzeContentResponse` messages. If `group_suggestion_responses` is set to true, all the suggestions to the same participant based on the same context will be grouped into a single Pub/Sub event or StreamingAnalyzeContentResponse.
     },
     "humanAgentSuggestionConfig": { # Detailed human agent assistant config. # Configuration for agent assistance of human agent participant.
@@ -656,6 +671,9 @@ Method Details
           },
         },
       ],
+      "generators": [ # Optional. List of various generator resource names used in the conversation profile.
+        "A String",
+      ],
       "groupSuggestionResponses": True or False, # If `group_suggestion_responses` is false, and there are multiple `feature_configs` in `event based suggestion` or StreamingAnalyzeContent, we will try to deliver suggestions to customers as soon as we get a new suggestion. Different types of suggestions based on the same context will be in separate Pub/Sub events or `StreamingAnalyzeContentResponse` messages. If `group_suggestion_responses` is set to true, all the suggestions to the same participant based on the same context will be grouped into a single Pub/Sub event or StreamingAnalyzeContentResponse.
     },
     "messageAnalysisConfig": { # Configuration for analyses to run on each conversation message. # Configuration for message analysis.
@@ -668,7 +686,7 @@ Method Details
}, }, "humanAgentHandoffConfig": { # Defines the hand off to a live agent, typically on which external agent service provider to connect to a conversation. Currently, this feature is not general available, please contact Google to get access. # Configuration for connecting to a live agent. Currently, this feature is not general available, please contact Google to get access. - "livePersonConfig": { # Configuration specific to LivePerson (https://www.liveperson.com). # Uses LivePerson (https://www.liveperson.com). + "livePersonConfig": { # Configuration specific to [LivePerson](https://www.liveperson.com). # Uses [LivePerson](https://www.liveperson.com). "accountNumber": "A String", # Required. Account number of the LivePerson account to connect. This is the account number you input at the login page. }, "salesforceLiveAgentConfig": { # Configuration specific to Salesforce Live Agent. # Uses Salesforce Live Agent. @@ -792,6 +810,9 @@

Method Details

           },
         },
       ],
+      "generators": [ # Optional. List of various generator resource names used in the conversation profile.
+        "A String",
+      ],
       "groupSuggestionResponses": True or False, # If `group_suggestion_responses` is false, and there are multiple `feature_configs` in `event based suggestion` or StreamingAnalyzeContent, we will try to deliver suggestions to customers as soon as we get a new suggestion. Different types of suggestions based on the same context will be in separate Pub/Sub events or `StreamingAnalyzeContentResponse` messages. If `group_suggestion_responses` is set to true, all the suggestions to the same participant based on the same context will be grouped into a single Pub/Sub event or StreamingAnalyzeContentResponse.
     },
     "humanAgentSuggestionConfig": { # Detailed human agent assistant config. # Configuration for agent assistance of human agent participant.
@@ -846,6 +867,9 @@ Method Details
           },
         },
       ],
+      "generators": [ # Optional. List of various generator resource names used in the conversation profile.
+        "A String",
+      ],
       "groupSuggestionResponses": True or False, # If `group_suggestion_responses` is false, and there are multiple `feature_configs` in `event based suggestion` or StreamingAnalyzeContent, we will try to deliver suggestions to customers as soon as we get a new suggestion. Different types of suggestions based on the same context will be in separate Pub/Sub events or `StreamingAnalyzeContentResponse` messages. If `group_suggestion_responses` is set to true, all the suggestions to the same participant based on the same context will be grouped into a single Pub/Sub event or StreamingAnalyzeContentResponse.
     },
     "messageAnalysisConfig": { # Configuration for analyses to run on each conversation message. # Configuration for message analysis.
@@ -858,7 +882,7 @@ Method Details
}, }, "humanAgentHandoffConfig": { # Defines the hand off to a live agent, typically on which external agent service provider to connect to a conversation. Currently, this feature is not general available, please contact Google to get access. # Configuration for connecting to a live agent. Currently, this feature is not general available, please contact Google to get access. - "livePersonConfig": { # Configuration specific to LivePerson (https://www.liveperson.com). # Uses LivePerson (https://www.liveperson.com). + "livePersonConfig": { # Configuration specific to [LivePerson](https://www.liveperson.com). # Uses [LivePerson](https://www.liveperson.com). "accountNumber": "A String", # Required. Account number of the LivePerson account to connect. This is the account number you input at the login page. }, "salesforceLiveAgentConfig": { # Configuration specific to Salesforce Live Agent. # Uses Salesforce Live Agent. @@ -990,6 +1014,9 @@

Method Details

           },
         },
       ],
+      "generators": [ # Optional. List of various generator resource names used in the conversation profile.
+        "A String",
+      ],
       "groupSuggestionResponses": True or False, # If `group_suggestion_responses` is false, and there are multiple `feature_configs` in `event based suggestion` or StreamingAnalyzeContent, we will try to deliver suggestions to customers as soon as we get a new suggestion. Different types of suggestions based on the same context will be in separate Pub/Sub events or `StreamingAnalyzeContentResponse` messages. If `group_suggestion_responses` is set to true, all the suggestions to the same participant based on the same context will be grouped into a single Pub/Sub event or StreamingAnalyzeContentResponse.
     },
     "humanAgentSuggestionConfig": { # Detailed human agent assistant config. # Configuration for agent assistance of human agent participant.
@@ -1044,6 +1071,9 @@ Method Details
           },
         },
       ],
+      "generators": [ # Optional. List of various generator resource names used in the conversation profile.
+        "A String",
+      ],
       "groupSuggestionResponses": True or False, # If `group_suggestion_responses` is false, and there are multiple `feature_configs` in `event based suggestion` or StreamingAnalyzeContent, we will try to deliver suggestions to customers as soon as we get a new suggestion. Different types of suggestions based on the same context will be in separate Pub/Sub events or `StreamingAnalyzeContentResponse` messages. If `group_suggestion_responses` is set to true, all the suggestions to the same participant based on the same context will be grouped into a single Pub/Sub event or StreamingAnalyzeContentResponse.
     },
     "messageAnalysisConfig": { # Configuration for analyses to run on each conversation message. # Configuration for message analysis.
@@ -1056,7 +1086,7 @@ Method Details
}, }, "humanAgentHandoffConfig": { # Defines the hand off to a live agent, typically on which external agent service provider to connect to a conversation. Currently, this feature is not general available, please contact Google to get access. # Configuration for connecting to a live agent. Currently, this feature is not general available, please contact Google to get access. - "livePersonConfig": { # Configuration specific to LivePerson (https://www.liveperson.com). # Uses LivePerson (https://www.liveperson.com). + "livePersonConfig": { # Configuration specific to [LivePerson](https://www.liveperson.com). # Uses [LivePerson](https://www.liveperson.com). "accountNumber": "A String", # Required. Account number of the LivePerson account to connect. This is the account number you input at the login page. }, "salesforceLiveAgentConfig": { # Configuration specific to Salesforce Live Agent. # Uses Salesforce Live Agent. @@ -1170,6 +1200,9 @@

Method Details

           },
         },
       ],
+      "generators": [ # Optional. List of various generator resource names used in the conversation profile.
+        "A String",
+      ],
       "groupSuggestionResponses": True or False, # If `group_suggestion_responses` is false, and there are multiple `feature_configs` in `event based suggestion` or StreamingAnalyzeContent, we will try to deliver suggestions to customers as soon as we get a new suggestion. Different types of suggestions based on the same context will be in separate Pub/Sub events or `StreamingAnalyzeContentResponse` messages. If `group_suggestion_responses` is set to true, all the suggestions to the same participant based on the same context will be grouped into a single Pub/Sub event or StreamingAnalyzeContentResponse.
     },
     "humanAgentSuggestionConfig": { # Detailed human agent assistant config. # Configuration for agent assistance of human agent participant.
@@ -1224,6 +1257,9 @@ Method Details
           },
         },
       ],
+      "generators": [ # Optional. List of various generator resource names used in the conversation profile.
+        "A String",
+      ],
       "groupSuggestionResponses": True or False, # If `group_suggestion_responses` is false, and there are multiple `feature_configs` in `event based suggestion` or StreamingAnalyzeContent, we will try to deliver suggestions to customers as soon as we get a new suggestion. Different types of suggestions based on the same context will be in separate Pub/Sub events or `StreamingAnalyzeContentResponse` messages. If `group_suggestion_responses` is set to true, all the suggestions to the same participant based on the same context will be grouped into a single Pub/Sub event or StreamingAnalyzeContentResponse.
     },
     "messageAnalysisConfig": { # Configuration for analyses to run on each conversation message. # Configuration for message analysis.
@@ -1236,7 +1272,7 @@ Method Details
}, }, "humanAgentHandoffConfig": { # Defines the hand off to a live agent, typically on which external agent service provider to connect to a conversation. Currently, this feature is not general available, please contact Google to get access. # Configuration for connecting to a live agent. Currently, this feature is not general available, please contact Google to get access. - "livePersonConfig": { # Configuration specific to LivePerson (https://www.liveperson.com). # Uses LivePerson (https://www.liveperson.com). + "livePersonConfig": { # Configuration specific to [LivePerson](https://www.liveperson.com). # Uses [LivePerson](https://www.liveperson.com). "accountNumber": "A String", # Required. Account number of the LivePerson account to connect. This is the account number you input at the login page. }, "salesforceLiveAgentConfig": { # Configuration specific to Salesforce Live Agent. # Uses Salesforce Live Agent. diff --git a/docs/dyn/dialogflow_v2beta1.projects.locations.conversations.participants.html b/docs/dyn/dialogflow_v2beta1.projects.locations.conversations.participants.html index ccdccc05718..f1f5c2c0b1b 100644 --- a/docs/dyn/dialogflow_v2beta1.projects.locations.conversations.participants.html +++ b/docs/dyn/dialogflow_v2beta1.projects.locations.conversations.participants.html @@ -124,6 +124,7 @@

Method Details

"noBargeInDuration": "A String", # Duration that is not eligible for barge-in at the beginning of the input audio. "totalDuration": "A String", # Total duration for the playback at the beginning of the input audio. }, + "defaultNoSpeechTimeout": "A String", # If set, use this no-speech timeout when the agent does not provide a no-speech timeout itself. "disableNoSpeechRecognizedEvent": True or False, # Only used in Participants.AnalyzeContent and Participants.StreamingAnalyzeContent. If `false` and recognition doesn't return any result, trigger `NO_SPEECH_RECOGNIZED` event to Dialogflow agent. "enableAutomaticPunctuation": True or False, # Enable automatic punctuation option at the speech backend. "enableWordInfo": True or False, # If `true`, Dialogflow returns SpeechWordInfo in StreamingRecognitionResult with information about the recognized speech words, e.g. start and end time offsets. If false or unspecified, Speech doesn't return any word-level information. diff --git a/docs/dyn/dialogflow_v2beta1.projects.locations.generators.html b/docs/dyn/dialogflow_v2beta1.projects.locations.generators.html new file mode 100644 index 00000000000..d94baf7cfc6 --- /dev/null +++ b/docs/dyn/dialogflow_v2beta1.projects.locations.generators.html @@ -0,0 +1,577 @@ + + + +

Dialogflow API . projects . locations . generators

+

Instance Methods

+

+ close()

+

Close httplib2 connections.

+

+ create(parent, body=None, generatorId=None, x__xgafv=None)

+

Creates a generator.

+

+ delete(name, x__xgafv=None)

+

Deletes a generator.

+

+ get(name, x__xgafv=None)

+

Retrieves a generator.

+

+ list(parent, pageSize=None, pageToken=None, x__xgafv=None)

+

Lists generators.

+

+ list_next()

+

Retrieves the next page of results.

+

+ patch(name, body=None, updateMask=None, x__xgafv=None)

+

Updates a generator.

+

Method Details

+
+ close() +
Close httplib2 connections.
+
+ +
+ create(parent, body=None, generatorId=None, x__xgafv=None) +
Creates a generator.
+
+Args:
+  parent: string, Required. The project/location to create a generator for. Format: `projects//locations/` (required)
+  body: object, The request body.
+    The object takes the form of:
+
+{ # LLM generator.
+  "createTime": "A String", # Output only. Creation time of this generator.
+  "description": "A String", # Optional. Human readable description of the generator.
+  "inferenceParameter": { # The parameters of inference. # Optional. Inference parameters for this generator.
+    "maxOutputTokens": 42, # Optional. Maximum number of the output tokens for the generator.
+    "temperature": 3.14, # Optional. Controls the randomness of LLM predictions. Low temperature = less random. High temperature = more random. If unset (or 0), uses a default value of 0.
+    "topK": 42, # Optional. Top-k changes how the model selects tokens for output. A top-k of 1 means the selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-k of 3 means that the next token is selected from among the 3 most probable tokens (using temperature). For each token selection step, the top K tokens with the highest probabilities are sampled. Then tokens are further filtered based on topP with the final token selected using temperature sampling. Specify a lower value for less random responses and a higher value for more random responses. Acceptable value is [1, 40], default to 40.
+    "topP": 3.14, # Optional. Top-p changes how the model selects tokens for output. Tokens are selected from most K (see topK parameter) probable to least until the sum of their probabilities equals the top-p value. For example, if tokens A, B, and C have a probability of 0.3, 0.2, and 0.1 and the top-p value is 0.5, then the model will select either A or B as the next token (using temperature) and doesn't consider C. The default top-p value is 0.95. Specify a lower value for less random responses and a higher value for more random responses. Acceptable value is [0.0, 1.0], default to 0.95.
+  },
+  "name": "A String", # Output only. Identifier. The resource name of the generator. Format: `projects//locations//generators/`
+  "summarizationContext": { # Summarization context that customer can configure. # Input of prebuilt Summarization feature.
+    "fewShotExamples": [ # Optional. List of few shot examples.
+      { # Providing examples in the generator (i.e. building a few-shot generator) helps convey the desired format of the LLM response.
+        "conversationContext": { # Context of the conversation, including transcripts. # Optional. Conversation transcripts.
+          "messageEntries": [ # Optional. List of message transcripts in the conversation.
+            { # Represents a message entry of a conversation.
+              "createTime": "A String", # Optional. Create time of the message entry.
+              "languageCode": "A String", # Optional. The language of the text. See [Language Support](https://cloud.google.com/dialogflow/docs/reference/language) for a list of the currently supported language codes.
+              "role": "A String", # Optional. Participant role of the message.
+              "text": "A String", # Optional. Transcript content of the message.
+            },
+          ],
+        },
+        "extraInfo": { # Optional. Key is the placeholder field name in input, value is the value of the placeholder. E.g. instruction contains "@price", and ingested data has <"price", "10">
+          "a_key": "A String",
+        },
+        "output": { # Suggestion generated using a Generator. # Required. Example output of the model.
+          "summarySuggestion": { # Suggested summary of the conversation. # Optional. Suggested summary.
+            "summarySections": [ # Required. All the parts of generated summary.
+              { # A component of the generated summary.
+                "section": "A String", # Required. Name of the section.
+                "summary": "A String", # Required. Summary text for the section.
+              },
+            ],
+          },
+        },
+        "summarizationSectionList": { # List of summarization sections. # Summarization sections.
+          "summarizationSections": [ # Optional. Summarization sections.
+            { # Represents the section of summarization.
+              "definition": "A String", # Optional. Definition of the section, for example, "what the customer needs help with or has question about."
+              "key": "A String", # Optional. Name of the section, for example, "situation".
+              "type": "A String", # Optional. Type of the summarization section.
+            },
+          ],
+        },
+      },
+    ],
+    "outputLanguageCode": "A String", # Optional. The target language of the generated summary. The language code for conversation will be used if this field is empty. Supported 2.0 and later versions.
+    "summarizationSections": [ # Optional. List of sections. Note it contains both predefined section sand customer defined sections.
+      { # Represents the section of summarization.
+        "definition": "A String", # Optional. Definition of the section, for example, "what the customer needs help with or has question about."
+        "key": "A String", # Optional. Name of the section, for example, "situation".
+        "type": "A String", # Optional. Type of the summarization section.
+      },
+    ],
+    "version": "A String", # Optional. Version of the feature. If not set, default to latest version. Current candidates are ["1.0"].
+  },
+  "triggerEvent": "A String", # Optional. The trigger event of the generator. It defines when the generator is triggered in a conversation.
+  "updateTime": "A String", # Output only. Update time of this generator.
+}
+
+  generatorId: string, Optional. The ID to use for the generator, which will become the final component of the generator's resource name. The generator ID must match the regular expression `a-zA-Z*`, with a length in the range [3,64]. If the field is not provided, an ID will be auto-generated. If the field is provided, the caller is responsible for (1) the uniqueness of the ID, otherwise the request will be rejected, and (2) consistently using either custom or auto-generated IDs within a project, to better ensure uniqueness.
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # LLM generator.
+  "createTime": "A String", # Output only. Creation time of this generator.
+  "description": "A String", # Optional. Human readable description of the generator.
+  "inferenceParameter": { # The parameters of inference. # Optional. Inference parameters for this generator.
+    "maxOutputTokens": 42, # Optional. Maximum number of the output tokens for the generator.
+    "temperature": 3.14, # Optional. Controls the randomness of LLM predictions. Low temperature = less random. High temperature = more random. If unset (or 0), uses a default value of 0.
+    "topK": 42, # Optional. Top-k changes how the model selects tokens for output. A top-k of 1 means the selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-k of 3 means that the next token is selected from among the 3 most probable tokens (using temperature). For each token selection step, the top K tokens with the highest probabilities are sampled. Then tokens are further filtered based on topP with the final token selected using temperature sampling. Specify a lower value for less random responses and a higher value for more random responses. Acceptable value is [1, 40], default to 40.
+    "topP": 3.14, # Optional. Top-p changes how the model selects tokens for output. Tokens are selected from most K (see topK parameter) probable to least until the sum of their probabilities equals the top-p value. For example, if tokens A, B, and C have a probability of 0.3, 0.2, and 0.1 and the top-p value is 0.5, then the model will select either A or B as the next token (using temperature) and doesn't consider C. The default top-p value is 0.95. Specify a lower value for less random responses and a higher value for more random responses. Acceptable value is [0.0, 1.0], default to 0.95.
+  },
+  "name": "A String", # Output only. Identifier. The resource name of the generator. Format: `projects//locations//generators/`
+  "summarizationContext": { # Summarization context that customer can configure. # Input of prebuilt Summarization feature.
+    "fewShotExamples": [ # Optional. List of few shot examples.
+      { # Providing examples in the generator (i.e. building a few-shot generator) helps convey the desired format of the LLM response.
+        "conversationContext": { # Context of the conversation, including transcripts. # Optional. Conversation transcripts.
+          "messageEntries": [ # Optional. List of message transcripts in the conversation.
+            { # Represents a message entry of a conversation.
+              "createTime": "A String", # Optional. Create time of the message entry.
+              "languageCode": "A String", # Optional. The language of the text. See [Language Support](https://cloud.google.com/dialogflow/docs/reference/language) for a list of the currently supported language codes.
+              "role": "A String", # Optional. Participant role of the message.
+              "text": "A String", # Optional. Transcript content of the message.
+            },
+          ],
+        },
+        "extraInfo": { # Optional. Key is the placeholder field name in input, value is the value of the placeholder. E.g. instruction contains "@price", and ingested data has <"price", "10">
+          "a_key": "A String",
+        },
+        "output": { # Suggestion generated using a Generator. # Required. Example output of the model.
+          "summarySuggestion": { # Suggested summary of the conversation. # Optional. Suggested summary.
+            "summarySections": [ # Required. All the parts of generated summary.
+              { # A component of the generated summary.
+                "section": "A String", # Required. Name of the section.
+                "summary": "A String", # Required. Summary text for the section.
+              },
+            ],
+          },
+        },
+        "summarizationSectionList": { # List of summarization sections. # Summarization sections.
+          "summarizationSections": [ # Optional. Summarization sections.
+            { # Represents the section of summarization.
+              "definition": "A String", # Optional. Definition of the section, for example, "what the customer needs help with or has question about."
+              "key": "A String", # Optional. Name of the section, for example, "situation".
+              "type": "A String", # Optional. Type of the summarization section.
+            },
+          ],
+        },
+      },
+    ],
+    "outputLanguageCode": "A String", # Optional. The target language of the generated summary. The language code for conversation will be used if this field is empty. Supported 2.0 and later versions.
+    "summarizationSections": [ # Optional. List of sections. Note it contains both predefined section sand customer defined sections.
+      { # Represents the section of summarization.
+        "definition": "A String", # Optional. Definition of the section, for example, "what the customer needs help with or has question about."
+        "key": "A String", # Optional. Name of the section, for example, "situation".
+        "type": "A String", # Optional. Type of the summarization section.
+      },
+    ],
+    "version": "A String", # Optional. Version of the feature. If not set, default to latest version. Current candidates are ["1.0"].
+  },
+  "triggerEvent": "A String", # Optional. The trigger event of the generator. It defines when the generator is triggered in a conversation.
+  "updateTime": "A String", # Output only. Update time of this generator.
+}
+
+ +
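For orientation, a minimal sketch of calling create() through google-api-python-client. The project, location, and generator ID are placeholders; the triggerEvent and section type values are assumed enum values to verify against the schema above, and Application Default Credentials are assumed to be configured.

    from googleapiclient.discovery import build

    # Build the discovery-based Dialogflow client (uses Application Default Credentials).
    service = build("dialogflow", "v2beta1")

    generator_body = {
        "description": "Summarizes support conversations.",
        "triggerEvent": "MANUAL_CALL",  # assumed enum value
        "summarizationContext": {
            "summarizationSections": [
                {
                    "key": "situation",
                    "definition": "What the customer needs help with.",
                    "type": "SITUATION",  # assumed section type
                },
            ],
            "version": "1.0",
        },
    }

    created = (
        service.projects()
        .locations()
        .generators()
        .create(
            parent="projects/my-project/locations/global",  # placeholder parent
            body=generator_body,
            generatorId="my-generator",  # placeholder; see the generatorId constraints above
        )
        .execute()
    )
    print(created["name"])  # server-assigned resource name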
+ delete(name, x__xgafv=None) +
Deletes a generator.
+
+Args:
+  name: string, Required. The generator resource name to delete. Format: `projects//locations//generators/` (required)
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # A generic empty message that you can re-use to avoid defining duplicated empty messages in your APIs. A typical example is to use it as the request or the response type of an API method. For instance: service Foo { rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty); }
+}
+
+ +
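A matching sketch for delete(); the resource name is a placeholder, and a successful call returns the empty message described above.

    from googleapiclient.discovery import build

    service = build("dialogflow", "v2beta1")
    # Deleting returns google.protobuf.Empty, i.e. an empty dict here.
    service.projects().locations().generators().delete(
        name="projects/my-project/locations/global/generators/my-generator"
    ).execute()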
+ get(name, x__xgafv=None) +
Retrieves a generator.
+
+Args:
+  name: string, Required. The generator resource name to retrieve. Format: `projects//locations//generators/` (required)
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # LLM generator.
+  "createTime": "A String", # Output only. Creation time of this generator.
+  "description": "A String", # Optional. Human readable description of the generator.
+  "inferenceParameter": { # The parameters of inference. # Optional. Inference parameters for this generator.
+    "maxOutputTokens": 42, # Optional. Maximum number of the output tokens for the generator.
+    "temperature": 3.14, # Optional. Controls the randomness of LLM predictions. Low temperature = less random. High temperature = more random. If unset (or 0), uses a default value of 0.
+    "topK": 42, # Optional. Top-k changes how the model selects tokens for output. A top-k of 1 means the selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-k of 3 means that the next token is selected from among the 3 most probable tokens (using temperature). For each token selection step, the top K tokens with the highest probabilities are sampled. Then tokens are further filtered based on topP with the final token selected using temperature sampling. Specify a lower value for less random responses and a higher value for more random responses. Acceptable value is [1, 40], default to 40.
+    "topP": 3.14, # Optional. Top-p changes how the model selects tokens for output. Tokens are selected from most K (see topK parameter) probable to least until the sum of their probabilities equals the top-p value. For example, if tokens A, B, and C have a probability of 0.3, 0.2, and 0.1 and the top-p value is 0.5, then the model will select either A or B as the next token (using temperature) and doesn't consider C. The default top-p value is 0.95. Specify a lower value for less random responses and a higher value for more random responses. Acceptable value is [0.0, 1.0], default to 0.95.
+  },
+  "name": "A String", # Output only. Identifier. The resource name of the generator. Format: `projects//locations//generators/`
+  "summarizationContext": { # Summarization context that customer can configure. # Input of prebuilt Summarization feature.
+    "fewShotExamples": [ # Optional. List of few shot examples.
+      { # Providing examples in the generator (i.e. building a few-shot generator) helps convey the desired format of the LLM response.
+        "conversationContext": { # Context of the conversation, including transcripts. # Optional. Conversation transcripts.
+          "messageEntries": [ # Optional. List of message transcripts in the conversation.
+            { # Represents a message entry of a conversation.
+              "createTime": "A String", # Optional. Create time of the message entry.
+              "languageCode": "A String", # Optional. The language of the text. See [Language Support](https://cloud.google.com/dialogflow/docs/reference/language) for a list of the currently supported language codes.
+              "role": "A String", # Optional. Participant role of the message.
+              "text": "A String", # Optional. Transcript content of the message.
+            },
+          ],
+        },
+        "extraInfo": { # Optional. Key is the placeholder field name in input, value is the value of the placeholder. E.g. instruction contains "@price", and ingested data has <"price", "10">
+          "a_key": "A String",
+        },
+        "output": { # Suggestion generated using a Generator. # Required. Example output of the model.
+          "summarySuggestion": { # Suggested summary of the conversation. # Optional. Suggested summary.
+            "summarySections": [ # Required. All the parts of generated summary.
+              { # A component of the generated summary.
+                "section": "A String", # Required. Name of the section.
+                "summary": "A String", # Required. Summary text for the section.
+              },
+            ],
+          },
+        },
+        "summarizationSectionList": { # List of summarization sections. # Summarization sections.
+          "summarizationSections": [ # Optional. Summarization sections.
+            { # Represents the section of summarization.
+              "definition": "A String", # Optional. Definition of the section, for example, "what the customer needs help with or has question about."
+              "key": "A String", # Optional. Name of the section, for example, "situation".
+              "type": "A String", # Optional. Type of the summarization section.
+            },
+          ],
+        },
+      },
+    ],
+    "outputLanguageCode": "A String", # Optional. The target language of the generated summary. The language code for conversation will be used if this field is empty. Supported 2.0 and later versions.
+    "summarizationSections": [ # Optional. List of sections. Note it contains both predefined section sand customer defined sections.
+      { # Represents the section of summarization.
+        "definition": "A String", # Optional. Definition of the section, for example, "what the customer needs help with or has question about."
+        "key": "A String", # Optional. Name of the section, for example, "situation".
+        "type": "A String", # Optional. Type of the summarization section.
+      },
+    ],
+    "version": "A String", # Optional. Version of the feature. If not set, default to latest version. Current candidates are ["1.0"].
+  },
+  "triggerEvent": "A String", # Optional. The trigger event of the generator. It defines when the generator is triggered in a conversation.
+  "updateTime": "A String", # Output only. Update time of this generator.
+}
+
+ +
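A minimal sketch of get(); the resource name is a placeholder.

    from googleapiclient.discovery import build

    service = build("dialogflow", "v2beta1")
    generator = (
        service.projects()
        .locations()
        .generators()
        .get(name="projects/my-project/locations/global/generators/my-generator")
        .execute()
    )
    print(generator.get("description"))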
+ list(parent, pageSize=None, pageToken=None, x__xgafv=None) +
Lists generators.
+
+Args:
+  parent: string, Required. The project/location to list generators for. Format: `projects//locations/` (required)
+  pageSize: integer, Optional. Maximum number of generators to return in a single page. Defaults to 10.
+  pageToken: string, Optional. The next_page_token value returned from a previous list request.
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Response of ListGenerators.
+  "generators": [ # List of generators retrieved.
+    { # LLM generator.
+      "createTime": "A String", # Output only. Creation time of this generator.
+      "description": "A String", # Optional. Human readable description of the generator.
+      "inferenceParameter": { # The parameters of inference. # Optional. Inference parameters for this generator.
+        "maxOutputTokens": 42, # Optional. Maximum number of the output tokens for the generator.
+        "temperature": 3.14, # Optional. Controls the randomness of LLM predictions. Low temperature = less random. High temperature = more random. If unset (or 0), uses a default value of 0.
+        "topK": 42, # Optional. Top-k changes how the model selects tokens for output. A top-k of 1 means the selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-k of 3 means that the next token is selected from among the 3 most probable tokens (using temperature). For each token selection step, the top K tokens with the highest probabilities are sampled. Then tokens are further filtered based on topP with the final token selected using temperature sampling. Specify a lower value for less random responses and a higher value for more random responses. Acceptable value is [1, 40], default to 40.
+        "topP": 3.14, # Optional. Top-p changes how the model selects tokens for output. Tokens are selected from most K (see topK parameter) probable to least until the sum of their probabilities equals the top-p value. For example, if tokens A, B, and C have a probability of 0.3, 0.2, and 0.1 and the top-p value is 0.5, then the model will select either A or B as the next token (using temperature) and doesn't consider C. The default top-p value is 0.95. Specify a lower value for less random responses and a higher value for more random responses. Acceptable value is [0.0, 1.0], default to 0.95.
+      },
+      "name": "A String", # Output only. Identifier. The resource name of the generator. Format: `projects//locations//generators/`
+      "summarizationContext": { # Summarization context that customer can configure. # Input of prebuilt Summarization feature.
+        "fewShotExamples": [ # Optional. List of few shot examples.
+          { # Providing examples in the generator (i.e. building a few-shot generator) helps convey the desired format of the LLM response.
+            "conversationContext": { # Context of the conversation, including transcripts. # Optional. Conversation transcripts.
+              "messageEntries": [ # Optional. List of message transcripts in the conversation.
+                { # Represents a message entry of a conversation.
+                  "createTime": "A String", # Optional. Create time of the message entry.
+                  "languageCode": "A String", # Optional. The language of the text. See [Language Support](https://cloud.google.com/dialogflow/docs/reference/language) for a list of the currently supported language codes.
+                  "role": "A String", # Optional. Participant role of the message.
+                  "text": "A String", # Optional. Transcript content of the message.
+                },
+              ],
+            },
+            "extraInfo": { # Optional. Key is the placeholder field name in input, value is the value of the placeholder. E.g. instruction contains "@price", and ingested data has <"price", "10">
+              "a_key": "A String",
+            },
+            "output": { # Suggestion generated using a Generator. # Required. Example output of the model.
+              "summarySuggestion": { # Suggested summary of the conversation. # Optional. Suggested summary.
+                "summarySections": [ # Required. All the parts of generated summary.
+                  { # A component of the generated summary.
+                    "section": "A String", # Required. Name of the section.
+                    "summary": "A String", # Required. Summary text for the section.
+                  },
+                ],
+              },
+            },
+            "summarizationSectionList": { # List of summarization sections. # Summarization sections.
+              "summarizationSections": [ # Optional. Summarization sections.
+                { # Represents the section of summarization.
+                  "definition": "A String", # Optional. Definition of the section, for example, "what the customer needs help with or has question about."
+                  "key": "A String", # Optional. Name of the section, for example, "situation".
+                  "type": "A String", # Optional. Type of the summarization section.
+                },
+              ],
+            },
+          },
+        ],
+        "outputLanguageCode": "A String", # Optional. The target language of the generated summary. The language code for conversation will be used if this field is empty. Supported 2.0 and later versions.
+        "summarizationSections": [ # Optional. List of sections. Note it contains both predefined section sand customer defined sections.
+          { # Represents the section of summarization.
+            "definition": "A String", # Optional. Definition of the section, for example, "what the customer needs help with or has question about."
+            "key": "A String", # Optional. Name of the section, for example, "situation".
+            "type": "A String", # Optional. Type of the summarization section.
+          },
+        ],
+        "version": "A String", # Optional. Version of the feature. If not set, default to latest version. Current candidates are ["1.0"].
+      },
+      "triggerEvent": "A String", # Optional. The trigger event of the generator. It defines when the generator is triggered in a conversation.
+      "updateTime": "A String", # Output only. Update time of this generator.
+    },
+  ],
+  "nextPageToken": "A String", # Token to retrieve the next page of results, or empty if there are no more results in the list.
+}
+
+ +
+ list_next() +
Retrieves the next page of results.
+
+        Args:
+          previous_request: The request for the previous page. (required)
+          previous_response: The response from the request for the previous page. (required)
+
+        Returns:
+          A request object that you can call 'execute()' on to request the next
+          page. Returns None if there are no more items in the collection.
+        
+
+ +
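A sketch of paging through all generators with list() and list_next(); the parent is a placeholder.

    from googleapiclient.discovery import build

    service = build("dialogflow", "v2beta1")
    generators = service.projects().locations().generators()
    request = generators.list(parent="projects/my-project/locations/global", pageSize=10)
    while request is not None:
        response = request.execute()
        for generator in response.get("generators", []):
            print(generator["name"], generator.get("triggerEvent"))
        # list_next() returns None once there are no more pages.
        request = generators.list_next(previous_request=request, previous_response=response)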
+ patch(name, body=None, updateMask=None, x__xgafv=None) +
Updates a generator.
+
+Args:
+  name: string, Output only. Identifier. The resource name of the generator. Format: `projects//locations//generators/` (required)
+  body: object, The request body.
+    The object takes the form of:
+
+{ # LLM generator.
+  "createTime": "A String", # Output only. Creation time of this generator.
+  "description": "A String", # Optional. Human readable description of the generator.
+  "inferenceParameter": { # The parameters of inference. # Optional. Inference parameters for this generator.
+    "maxOutputTokens": 42, # Optional. Maximum number of the output tokens for the generator.
+    "temperature": 3.14, # Optional. Controls the randomness of LLM predictions. Low temperature = less random. High temperature = more random. If unset (or 0), uses a default value of 0.
+    "topK": 42, # Optional. Top-k changes how the model selects tokens for output. A top-k of 1 means the selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-k of 3 means that the next token is selected from among the 3 most probable tokens (using temperature). For each token selection step, the top K tokens with the highest probabilities are sampled. Then tokens are further filtered based on topP with the final token selected using temperature sampling. Specify a lower value for less random responses and a higher value for more random responses. Acceptable value is [1, 40], default to 40.
+    "topP": 3.14, # Optional. Top-p changes how the model selects tokens for output. Tokens are selected from most K (see topK parameter) probable to least until the sum of their probabilities equals the top-p value. For example, if tokens A, B, and C have a probability of 0.3, 0.2, and 0.1 and the top-p value is 0.5, then the model will select either A or B as the next token (using temperature) and doesn't consider C. The default top-p value is 0.95. Specify a lower value for less random responses and a higher value for more random responses. Acceptable value is [0.0, 1.0], default to 0.95.
+  },
+  "name": "A String", # Output only. Identifier. The resource name of the generator. Format: `projects//locations//generators/`
+  "summarizationContext": { # Summarization context that customer can configure. # Input of prebuilt Summarization feature.
+    "fewShotExamples": [ # Optional. List of few shot examples.
+      { # Providing examples in the generator (i.e. building a few-shot generator) helps convey the desired format of the LLM response.
+        "conversationContext": { # Context of the conversation, including transcripts. # Optional. Conversation transcripts.
+          "messageEntries": [ # Optional. List of message transcripts in the conversation.
+            { # Represents a message entry of a conversation.
+              "createTime": "A String", # Optional. Create time of the message entry.
+              "languageCode": "A String", # Optional. The language of the text. See [Language Support](https://cloud.google.com/dialogflow/docs/reference/language) for a list of the currently supported language codes.
+              "role": "A String", # Optional. Participant role of the message.
+              "text": "A String", # Optional. Transcript content of the message.
+            },
+          ],
+        },
+        "extraInfo": { # Optional. Key is the placeholder field name in input, value is the value of the placeholder. E.g. instruction contains "@price", and ingested data has <"price", "10">
+          "a_key": "A String",
+        },
+        "output": { # Suggestion generated using a Generator. # Required. Example output of the model.
+          "summarySuggestion": { # Suggested summary of the conversation. # Optional. Suggested summary.
+            "summarySections": [ # Required. All the parts of generated summary.
+              { # A component of the generated summary.
+                "section": "A String", # Required. Name of the section.
+                "summary": "A String", # Required. Summary text for the section.
+              },
+            ],
+          },
+        },
+        "summarizationSectionList": { # List of summarization sections. # Summarization sections.
+          "summarizationSections": [ # Optional. Summarization sections.
+            { # Represents the section of summarization.
+              "definition": "A String", # Optional. Definition of the section, for example, "what the customer needs help with or has question about."
+              "key": "A String", # Optional. Name of the section, for example, "situation".
+              "type": "A String", # Optional. Type of the summarization section.
+            },
+          ],
+        },
+      },
+    ],
+    "outputLanguageCode": "A String", # Optional. The target language of the generated summary. The language code for conversation will be used if this field is empty. Supported 2.0 and later versions.
+    "summarizationSections": [ # Optional. List of sections. Note it contains both predefined section sand customer defined sections.
+      { # Represents the section of summarization.
+        "definition": "A String", # Optional. Definition of the section, for example, "what the customer needs help with or has question about."
+        "key": "A String", # Optional. Name of the section, for example, "situation".
+        "type": "A String", # Optional. Type of the summarization section.
+      },
+    ],
+    "version": "A String", # Optional. Version of the feature. If not set, default to latest version. Current candidates are ["1.0"].
+  },
+  "triggerEvent": "A String", # Optional. The trigger event of the generator. It defines when the generator is triggered in a conversation.
+  "updateTime": "A String", # Output only. Update time of this generator.
+}
+
+  updateMask: string, Optional. The list of fields to update.
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # LLM generator.
+  "createTime": "A String", # Output only. Creation time of this generator.
+  "description": "A String", # Optional. Human readable description of the generator.
+  "inferenceParameter": { # The parameters of inference. # Optional. Inference parameters for this generator.
+    "maxOutputTokens": 42, # Optional. Maximum number of the output tokens for the generator.
+    "temperature": 3.14, # Optional. Controls the randomness of LLM predictions. Low temperature = less random. High temperature = more random. If unset (or 0), uses a default value of 0.
+    "topK": 42, # Optional. Top-k changes how the model selects tokens for output. A top-k of 1 means the selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-k of 3 means that the next token is selected from among the 3 most probable tokens (using temperature). For each token selection step, the top K tokens with the highest probabilities are sampled. Then tokens are further filtered based on topP with the final token selected using temperature sampling. Specify a lower value for less random responses and a higher value for more random responses. Acceptable value is [1, 40], default to 40.
+    "topP": 3.14, # Optional. Top-p changes how the model selects tokens for output. Tokens are selected from most K (see topK parameter) probable to least until the sum of their probabilities equals the top-p value. For example, if tokens A, B, and C have a probability of 0.3, 0.2, and 0.1 and the top-p value is 0.5, then the model will select either A or B as the next token (using temperature) and doesn't consider C. The default top-p value is 0.95. Specify a lower value for less random responses and a higher value for more random responses. Acceptable value is [0.0, 1.0], default to 0.95.
+  },
+  "name": "A String", # Output only. Identifier. The resource name of the generator. Format: `projects//locations//generators/`
+  "summarizationContext": { # Summarization context that customer can configure. # Input of prebuilt Summarization feature.
+    "fewShotExamples": [ # Optional. List of few shot examples.
+      { # Providing examples in the generator (i.e. building a few-shot generator) helps convey the desired format of the LLM response.
+        "conversationContext": { # Context of the conversation, including transcripts. # Optional. Conversation transcripts.
+          "messageEntries": [ # Optional. List of message transcripts in the conversation.
+            { # Represents a message entry of a conversation.
+              "createTime": "A String", # Optional. Create time of the message entry.
+              "languageCode": "A String", # Optional. The language of the text. See [Language Support](https://cloud.google.com/dialogflow/docs/reference/language) for a list of the currently supported language codes.
+              "role": "A String", # Optional. Participant role of the message.
+              "text": "A String", # Optional. Transcript content of the message.
+            },
+          ],
+        },
+        "extraInfo": { # Optional. Key is the placeholder field name in input, value is the value of the placeholder. E.g. instruction contains "@price", and ingested data has <"price", "10">
+          "a_key": "A String",
+        },
+        "output": { # Suggestion generated using a Generator. # Required. Example output of the model.
+          "summarySuggestion": { # Suggested summary of the conversation. # Optional. Suggested summary.
+            "summarySections": [ # Required. All the parts of generated summary.
+              { # A component of the generated summary.
+                "section": "A String", # Required. Name of the section.
+                "summary": "A String", # Required. Summary text for the section.
+              },
+            ],
+          },
+        },
+        "summarizationSectionList": { # List of summarization sections. # Summarization sections.
+          "summarizationSections": [ # Optional. Summarization sections.
+            { # Represents the section of summarization.
+              "definition": "A String", # Optional. Definition of the section, for example, "what the customer needs help with or has question about."
+              "key": "A String", # Optional. Name of the section, for example, "situation".
+              "type": "A String", # Optional. Type of the summarization section.
+            },
+          ],
+        },
+      },
+    ],
+    "outputLanguageCode": "A String", # Optional. The target language of the generated summary. The language code for conversation will be used if this field is empty. Supported 2.0 and later versions.
+    "summarizationSections": [ # Optional. List of sections. Note it contains both predefined section sand customer defined sections.
+      { # Represents the section of summarization.
+        "definition": "A String", # Optional. Definition of the section, for example, "what the customer needs help with or has question about."
+        "key": "A String", # Optional. Name of the section, for example, "situation".
+        "type": "A String", # Optional. Type of the summarization section.
+      },
+    ],
+    "version": "A String", # Optional. Version of the feature. If not set, default to latest version. Current candidates are ["1.0"].
+  },
+  "triggerEvent": "A String", # Optional. The trigger event of the generator. It defines when the generator is triggered in a conversation.
+  "updateTime": "A String", # Output only. Update time of this generator.
+}
+
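A sketch of a partial update via patch() with updateMask; the resource name is a placeholder, and only the fields named in updateMask are written.

    from googleapiclient.discovery import build

    service = build("dialogflow", "v2beta1")
    updated = (
        service.projects()
        .locations()
        .generators()
        .patch(
            name="projects/my-project/locations/global/generators/my-generator",
            body={"description": "Updated description."},
            updateMask="description",  # comma-separated list of fields to update
        )
        .execute()
    )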
+
+
\ No newline at end of file
diff --git a/docs/dyn/dialogflow_v2beta1.projects.locations.html b/docs/dyn/dialogflow_v2beta1.projects.locations.html
index 94406a0d441..ea91938d5ad 100644
--- a/docs/dyn/dialogflow_v2beta1.projects.locations.html
+++ b/docs/dyn/dialogflow_v2beta1.projects.locations.html
@@ -94,6 +94,11 @@ Instance Methods
Returns the conversations Resource.

+

+ generators() +

+

Returns the generators Resource.

+

knowledgeBases()

@@ -104,6 +109,11 @@ Instance Methods
Returns the operations Resource.

+

+ statelessSuggestion() +

+

Returns the statelessSuggestion Resource.

+

suggestions()

diff --git a/docs/dyn/dialogflow_v2beta1.projects.locations.statelessSuggestion.html b/docs/dyn/dialogflow_v2beta1.projects.locations.statelessSuggestion.html
new file mode 100644
index 00000000000..5366a3c5c51
--- /dev/null
+++ b/docs/dyn/dialogflow_v2beta1.projects.locations.statelessSuggestion.html
@@ -0,0 +1,197 @@
+
+
+
+

Dialogflow API . projects . locations . statelessSuggestion

+

Instance Methods

+

+ close()

+

Close httplib2 connections.

+

+ generate(parent, body=None, x__xgafv=None)

+

Generates and returns a suggestion for a conversation that does not have a resource created for it.

+

Method Details

+
+ close() +
Close httplib2 connections.
+
+ +
+ generate(parent, body=None, x__xgafv=None) +
Generates and returns a suggestion for a conversation that does not have a resource created for it.
+
+Args:
+  parent: string, Required. The parent resource to charge for the Suggestion's generation. Format: `projects//locations/`. (required)
+  body: object, The request body.
+    The object takes the form of:
+
+{ # The request message for Conversations.GenerateStatelessSuggestion.
+  "conversationContext": { # Context of the conversation, including transcripts. # Optional. Context of the conversation, including transcripts.
+    "messageEntries": [ # Optional. List of message transcripts in the conversation.
+      { # Represents a message entry of a conversation.
+        "createTime": "A String", # Optional. Create time of the message entry.
+        "languageCode": "A String", # Optional. The language of the text. See [Language Support](https://cloud.google.com/dialogflow/docs/reference/language) for a list of the currently supported language codes.
+        "role": "A String", # Optional. Participant role of the message.
+        "text": "A String", # Optional. Transcript content of the message.
+      },
+    ],
+  },
+  "generator": { # LLM generator. # Uncreated generator. It should be a complete generator that includes all information about the generator.
+    "createTime": "A String", # Output only. Creation time of this generator.
+    "description": "A String", # Optional. Human readable description of the generator.
+    "inferenceParameter": { # The parameters of inference. # Optional. Inference parameters for this generator.
+      "maxOutputTokens": 42, # Optional. Maximum number of the output tokens for the generator.
+      "temperature": 3.14, # Optional. Controls the randomness of LLM predictions. Low temperature = less random. High temperature = more random. If unset (or 0), uses a default value of 0.
+      "topK": 42, # Optional. Top-k changes how the model selects tokens for output. A top-k of 1 means the selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-k of 3 means that the next token is selected from among the 3 most probable tokens (using temperature). For each token selection step, the top K tokens with the highest probabilities are sampled. Then tokens are further filtered based on topP with the final token selected using temperature sampling. Specify a lower value for less random responses and a higher value for more random responses. Acceptable value is [1, 40], default to 40.
+      "topP": 3.14, # Optional. Top-p changes how the model selects tokens for output. Tokens are selected from most K (see topK parameter) probable to least until the sum of their probabilities equals the top-p value. For example, if tokens A, B, and C have a probability of 0.3, 0.2, and 0.1 and the top-p value is 0.5, then the model will select either A or B as the next token (using temperature) and doesn't consider C. The default top-p value is 0.95. Specify a lower value for less random responses and a higher value for more random responses. Acceptable value is [0.0, 1.0], default to 0.95.
+    },
+    "name": "A String", # Output only. Identifier. The resource name of the generator. Format: `projects//locations//generators/`
+    "summarizationContext": { # Summarization context that customer can configure. # Input of prebuilt Summarization feature.
+      "fewShotExamples": [ # Optional. List of few shot examples.
+        { # Providing examples in the generator (i.e. building a few-shot generator) helps convey the desired format of the LLM response.
+          "conversationContext": { # Context of the conversation, including transcripts. # Optional. Conversation transcripts.
+            "messageEntries": [ # Optional. List of message transcripts in the conversation.
+              { # Represents a message entry of a conversation.
+                "createTime": "A String", # Optional. Create time of the message entry.
+                "languageCode": "A String", # Optional. The language of the text. See [Language Support](https://cloud.google.com/dialogflow/docs/reference/language) for a list of the currently supported language codes.
+                "role": "A String", # Optional. Participant role of the message.
+                "text": "A String", # Optional. Transcript content of the message.
+              },
+            ],
+          },
+          "extraInfo": { # Optional. Key is the placeholder field name in input, value is the value of the placeholder. E.g. instruction contains "@price", and ingested data has <"price", "10">
+            "a_key": "A String",
+          },
+          "output": { # Suggestion generated using a Generator. # Required. Example output of the model.
+            "summarySuggestion": { # Suggested summary of the conversation. # Optional. Suggested summary.
+              "summarySections": [ # Required. All the parts of generated summary.
+                { # A component of the generated summary.
+                  "section": "A String", # Required. Name of the section.
+                  "summary": "A String", # Required. Summary text for the section.
+                },
+              ],
+            },
+          },
+          "summarizationSectionList": { # List of summarization sections. # Summarization sections.
+            "summarizationSections": [ # Optional. Summarization sections.
+              { # Represents the section of summarization.
+                "definition": "A String", # Optional. Definition of the section, for example, "what the customer needs help with or has question about."
+                "key": "A String", # Optional. Name of the section, for example, "situation".
+                "type": "A String", # Optional. Type of the summarization section.
+              },
+            ],
+          },
+        },
+      ],
+      "outputLanguageCode": "A String", # Optional. The target language of the generated summary. The language code for conversation will be used if this field is empty. Supported 2.0 and later versions.
+      "summarizationSections": [ # Optional. List of sections. Note it contains both predefined section sand customer defined sections.
+        { # Represents the section of summarization.
+          "definition": "A String", # Optional. Definition of the section, for example, "what the customer needs help with or has question about."
+          "key": "A String", # Optional. Name of the section, for example, "situation".
+          "type": "A String", # Optional. Type of the summarization section.
+        },
+      ],
+      "version": "A String", # Optional. Version of the feature. If not set, default to latest version. Current candidates are ["1.0"].
+    },
+    "triggerEvent": "A String", # Optional. The trigger event of the generator. It defines when the generator is triggered in a conversation.
+    "updateTime": "A String", # Output only. Update time of this generator.
+  },
+  "generatorName": "A String", # The resource name of the existing created generator. Format: `projects//locations//generators/`
+  "triggerEvents": [ # Optional. A list of trigger events. Generator will be triggered only if it's trigger event is included here.
+    "A String",
+  ],
+}
+
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # The response message for Conversations.GenerateStatelessSuggestion.
+  "generatorSuggestion": { # Suggestion generated using a Generator. # Required. Generated suggestion for a conversation.
+    "summarySuggestion": { # Suggested summary of the conversation. # Optional. Suggested summary.
+      "summarySections": [ # Required. All the parts of generated summary.
+        { # A component of the generated summary.
+          "section": "A String", # Required. Name of the section.
+          "summary": "A String", # Required. Summary text for the section.
+        },
+      ],
+    },
+  },
+}
+
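A minimal usage sketch of this method via google-api-python-client (the project, location, generator name, and transcript values below are illustrative placeholders, not part of the generated reference):

    from googleapiclient.discovery import build

    # Builds the Dialogflow v2beta1 client using application-default credentials.
    service = build("dialogflow", "v2beta1")

    body = {
        "conversationContext": {
            "messageEntries": [
                {"role": "END_USER", "text": "I was double charged.", "languageCode": "en-US"},
            ]
        },
        # Reference an existing generator by resource name instead of passing one inline.
        "generatorName": "projects/my-project/locations/global/generators/my-generator",
    }

    response = (
        service.projects()
        .locations()
        .statelessSuggestion()
        .generate(parent="projects/my-project/locations/global", body=body)
        .execute()
    )
    print(response.get("generatorSuggestion"))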
+
\ No newline at end of file
diff --git a/docs/dyn/dialogflow_v2beta1.projects.locations.suggestions.html b/docs/dyn/dialogflow_v2beta1.projects.locations.suggestions.html
index 233eb607534..462615c5026 100644
--- a/docs/dyn/dialogflow_v2beta1.projects.locations.suggestions.html
+++ b/docs/dyn/dialogflow_v2beta1.projects.locations.suggestions.html
@@ -159,6 +159,9 @@

Method Details

}, }, ], + "generators": [ # Optional. List of various generator resource names used in the conversation profile. + "A String", + ], "groupSuggestionResponses": True or False, # If `group_suggestion_responses` is false, and there are multiple `feature_configs` in `event based suggestion` or StreamingAnalyzeContent, we will try to deliver suggestions to customers as soon as we get new suggestion. Different type of suggestions based on the same context will be in separate Pub/Sub event or `StreamingAnalyzeContentResponse`. If `group_suggestion_responses` set to true. All the suggestions to the same participant based on the same context will be grouped into a single Pub/Sub event or StreamingAnalyzeContentResponse. }, "humanAgentSuggestionConfig": { # Detail human agent assistant config. # Configuration for agent assistance of human agent participant. @@ -213,6 +216,9 @@

Method Details

}, }, ], + "generators": [ # Optional. List of various generator resource names used in the conversation profile. + "A String", + ], "groupSuggestionResponses": True or False, # If `group_suggestion_responses` is false, and there are multiple `feature_configs` in `event based suggestion` or StreamingAnalyzeContent, we will try to deliver suggestions to customers as soon as we get new suggestion. Different type of suggestions based on the same context will be in separate Pub/Sub event or `StreamingAnalyzeContentResponse`. If `group_suggestion_responses` set to true. All the suggestions to the same participant based on the same context will be grouped into a single Pub/Sub event or StreamingAnalyzeContentResponse. }, "messageAnalysisConfig": { # Configuration for analyses to run on each conversation message. # Configuration for message analysis. @@ -225,7 +231,7 @@

Method Details

}, }, "humanAgentHandoffConfig": { # Defines the hand off to a live agent, typically on which external agent service provider to connect to a conversation. Currently, this feature is not general available, please contact Google to get access. # Configuration for connecting to a live agent. Currently, this feature is not general available, please contact Google to get access. - "livePersonConfig": { # Configuration specific to LivePerson (https://www.liveperson.com). # Uses LivePerson (https://www.liveperson.com). + "livePersonConfig": { # Configuration specific to [LivePerson](https://www.liveperson.com). # Uses [LivePerson](https://www.liveperson.com). "accountNumber": "A String", # Required. Account number of the LivePerson account to connect. This is the account number you input at the login page. }, "salesforceLiveAgentConfig": { # Configuration specific to Salesforce Live Agent. # Uses Salesforce Live Agent. diff --git a/docs/dyn/dialogflow_v2beta1.projects.suggestions.html b/docs/dyn/dialogflow_v2beta1.projects.suggestions.html index fa17eb6b68c..a06739ce92b 100644 --- a/docs/dyn/dialogflow_v2beta1.projects.suggestions.html +++ b/docs/dyn/dialogflow_v2beta1.projects.suggestions.html @@ -159,6 +159,9 @@

Method Details

}, }, ], + "generators": [ # Optional. List of various generator resource names used in the conversation profile. + "A String", + ], "groupSuggestionResponses": True or False, # If `group_suggestion_responses` is false, and there are multiple `feature_configs` in `event based suggestion` or StreamingAnalyzeContent, we will try to deliver suggestions to customers as soon as we get new suggestion. Different type of suggestions based on the same context will be in separate Pub/Sub event or `StreamingAnalyzeContentResponse`. If `group_suggestion_responses` set to true. All the suggestions to the same participant based on the same context will be grouped into a single Pub/Sub event or StreamingAnalyzeContentResponse. }, "humanAgentSuggestionConfig": { # Detail human agent assistant config. # Configuration for agent assistance of human agent participant. @@ -213,6 +216,9 @@

Method Details

}, }, ], + "generators": [ # Optional. List of various generator resource names used in the conversation profile. + "A String", + ], "groupSuggestionResponses": True or False, # If `group_suggestion_responses` is false, and there are multiple `feature_configs` in `event based suggestion` or StreamingAnalyzeContent, we will try to deliver suggestions to customers as soon as we get new suggestion. Different type of suggestions based on the same context will be in separate Pub/Sub event or `StreamingAnalyzeContentResponse`. If `group_suggestion_responses` set to true. All the suggestions to the same participant based on the same context will be grouped into a single Pub/Sub event or StreamingAnalyzeContentResponse. }, "messageAnalysisConfig": { # Configuration for analyses to run on each conversation message. # Configuration for message analysis. @@ -225,7 +231,7 @@

Method Details

}, }, "humanAgentHandoffConfig": { # Defines the hand off to a live agent, typically on which external agent service provider to connect to a conversation. Currently, this feature is not general available, please contact Google to get access. # Configuration for connecting to a live agent. Currently, this feature is not general available, please contact Google to get access. - "livePersonConfig": { # Configuration specific to LivePerson (https://www.liveperson.com). # Uses LivePerson (https://www.liveperson.com). + "livePersonConfig": { # Configuration specific to [LivePerson](https://www.liveperson.com). # Uses [LivePerson](https://www.liveperson.com). "accountNumber": "A String", # Required. Account number of the LivePerson account to connect. This is the account number you input at the login page. }, "salesforceLiveAgentConfig": { # Configuration specific to Salesforce Live Agent. # Uses Salesforce Live Agent. diff --git a/docs/dyn/discoveryengine_v1.projects.html b/docs/dyn/discoveryengine_v1.projects.html index e66ad345760..a0cab0f983d 100644 --- a/docs/dyn/discoveryengine_v1.projects.html +++ b/docs/dyn/discoveryengine_v1.projects.html @@ -87,10 +87,56 @@

Instance Methods

close()

Close httplib2 connections.

+

+ provision(name, body=None, x__xgafv=None)

+

Provisions the project resource. During the process, related systems will get prepared and initialized. The caller must read the [Terms for data use](https://cloud.google.com/retail/data-use-terms) and can optionally indicate consent to those terms in the request.

Method Details

close()
Close httplib2 connections.
+
+ provision(name, body=None, x__xgafv=None) +
Provisions the project resource. During the process, related systems will get prepared and initialized. The caller must read the [Terms for data use](https://cloud.google.com/retail/data-use-terms) and can optionally indicate consent to those terms in the request.
+
+Args:
+  name: string, Required. Full resource name of a Project, such as `projects/{project_id_or_number}`. (required)
+  body: object, The request body.
+    The object takes the form of:
+
+{ # Request for ProjectService.ProvisionProject method.
+  "acceptDataUseTerms": True or False, # Required. Set to `true` to specify that caller has read and would like to give consent to the [Terms for data use](https://cloud.google.com/retail/data-use-terms).
+  "dataUseTermsVersion": "A String", # Required. The version of the [Terms for data use](https://cloud.google.com/retail/data-use-terms) that caller has read and would like to give consent to. Acceptable version is `2022-11-23`, and this may change over time.
+}
+
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # This resource represents a long-running operation that is the result of a network API call.
+  "done": True or False, # If the value is `false`, it means the operation is still in progress. If `true`, the operation is completed, and either `error` or `response` is available.
+  "error": { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # The error result of the operation in case of failure or cancellation.
+    "code": 42, # The status code, which should be an enum value of google.rpc.Code.
+    "details": [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
+      {
+        "a_key": "", # Properties of the object. Contains field @type with type URL.
+      },
+    ],
+    "message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
+  },
+  "metadata": { # Service-specific metadata associated with the operation. It typically contains progress information and common metadata such as create time. Some services might not provide such metadata. Any method that returns a long-running operation should document the metadata type, if any.
+    "a_key": "", # Properties of the object. Contains field @type with type URL.
+  },
+  "name": "A String", # The server-assigned name, which is only unique within the same service that originally returns it. If you use the default HTTP mapping, the `name` should be a resource name ending with `operations/{unique_id}`.
+  "response": { # The normal, successful response of the operation. If the original method returns no data on success, such as `Delete`, the response is `google.protobuf.Empty`. If the original method is standard `Get`/`Create`/`Update`, the response should be the resource. For other methods, the response should have the type `XxxResponse`, where `Xxx` is the original method name. For example, if the original method name is `TakeSnapshot()`, the inferred response type is `TakeSnapshotResponse`.
+    "a_key": "", # Properties of the object. Contains field @type with type URL.
+  },
+}
+
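A hedged sketch of calling provision() with the client library (the project number and terms version below are placeholders; the method returns a long-running operation):

    from googleapiclient.discovery import build

    service = build("discoveryengine", "v1")

    operation = service.projects().provision(
        name="projects/123456789",  # placeholder project number
        body={
            "acceptDataUseTerms": True,  # consent to the Terms for data use
            "dataUseTermsVersion": "2022-11-23",
        },
    ).execute()

    # The response is a long-running operation; poll `done` before reading
    # `response` or `error`.
    print(operation.get("name"), operation.get("done"))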
+
\ No newline at end of file
diff --git a/docs/dyn/discoveryengine_v1.projects.locations.collections.dataStores.controls.html b/docs/dyn/discoveryengine_v1.projects.locations.collections.dataStores.controls.html
new file mode 100644
index 00000000000..f59b3a42e4a
--- /dev/null
+++ b/docs/dyn/discoveryengine_v1.projects.locations.collections.dataStores.controls.html
@@ -0,0 +1,482 @@

Discovery Engine API . projects . locations . collections . dataStores . controls

+

Instance Methods

+

+ close()

+

Close httplib2 connections.

+

+ create(parent, body=None, controlId=None, x__xgafv=None)

+

Creates a Control. By default 1000 controls are allowed for a data store. A request can be submitted to adjust this limit. If the Control to create already exists, an ALREADY_EXISTS error is returned.

+

+ delete(name, x__xgafv=None)

+

Deletes a Control. If the Control to delete does not exist, a NOT_FOUND error is returned.

+

+ get(name, x__xgafv=None)

+

Gets a Control.

+

+ list(parent, filter=None, pageSize=None, pageToken=None, x__xgafv=None)

+

Lists all Controls by their parent DataStore.

+

+ list_next()

+

Retrieves the next page of results.

+

+ patch(name, body=None, updateMask=None, x__xgafv=None)

+

Updates a Control. Control action type cannot be changed. If the Control to update does not exist, a NOT_FOUND error is returned.

+

Method Details

+
+ close() +
Close httplib2 connections.
+
+ +
+ create(parent, body=None, controlId=None, x__xgafv=None) +
Creates a Control. By default 1000 controls are allowed for a data store. A request can be submitted to adjust this limit. If the Control to create already exists, an ALREADY_EXISTS error is returned.
+
+Args:
+  parent: string, Required. Full resource name of parent data store. Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}` (required)
+  body: object, The request body.
+    The object takes the form of:
+
+{ # Defines a conditioned behavior to employ during serving. Must be attached to a ServingConfig to be considered at serving time. Permitted actions dependent on `SolutionType`.
+  "associatedServingConfigIds": [ # Output only. List of all ServingConfig ids this control is attached to. May take up to 10 minutes to update after changes.
+    "A String",
+  ],
+  "boostAction": { # Adjusts order of products in returned list. # Defines a boost-type control
+    "boost": 3.14, # Required. Strength of the boost, which should be in [-1, 1]. Negative boost means demotion. Default is 0.0 (No-op).
+    "dataStore": "A String", # Required. Specifies which data store's documents can be boosted by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. Specifies which products to apply the boost to. If no filter is provided all products will be boosted (No-op). Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "conditions": [ # Determines when the associated action will trigger. Omit to always apply the action. Currently only a single condition may be specified. Otherwise an INVALID ARGUMENT error is thrown.
+    { # Defines circumstances to be checked before allowing a behavior
+      "activeTimeRange": [ # Range of time(s) specifying when condition is active. Maximum of 10 time ranges.
+        { # Used for time-dependent conditions.
+          "endTime": "A String", # End of time range. Range is inclusive. Must be in the future.
+          "startTime": "A String", # Start of time range. Range is inclusive.
+        },
+      ],
+      "queryTerms": [ # Search only A list of terms to match the query on. Maximum of 10 query terms.
+        { # Matcher for search request query
+          "fullMatch": True or False, # Whether the search query needs to exactly match the query term.
+          "value": "A String", # The specific query value to match against Must be lowercase, must be UTF-8. Can have at most 3 space separated terms if full_match is true. Cannot be an empty string. Maximum length of 5000 characters.
+        },
+      ],
+    },
+  ],
+  "displayName": "A String", # Required. Human readable name. The identifier used in UI views. Must be UTF-8 encoded string. Length limit is 128 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  "filterAction": { # Specified which products may be included in results. Uses same filter as boost. # Defines a filter-type control Currently not supported by Recommendation
+    "dataStore": "A String", # Required. Specifies which data store's documents can be filtered by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. A filter to apply on the matching condition results. Required Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "name": "A String", # Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`
+  "redirectAction": { # Redirects a shopper to the provided URI. # Defines a redirect-type control.
+    "redirectUri": "A String", # Required. The URI to which the shopper will be redirected. Required. URI must have length equal or less than 2000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "solutionType": "A String", # Required. Immutable. What solution the control belongs to. Must be compatible with vertical of resource. Otherwise an INVALID ARGUMENT error is thrown.
+  "synonymsAction": { # Creates a set of terms that will act as synonyms of one another. Example: "happy" will also be considered as "glad", "glad" will also be considered as "happy". # Treats a group of terms as synonyms of one another.
+    "synonyms": [ # Defines a set of synonyms. Can specify up to 100 synonyms. Must specify at least 2 synonyms. Otherwise an INVALID ARGUMENT error is thrown.
+      "A String",
+    ],
+  },
+  "useCases": [ # Specifies the use case for the control. Affects what condition fields can be set. Only applies to SOLUTION_TYPE_SEARCH. Currently only allow one use case per control. Must be set when solution_type is SolutionType.SOLUTION_TYPE_SEARCH.
+    "A String",
+  ],
+}
+
+  controlId: string, Required. The ID to use for the Control, which will become the final component of the Control's resource name. This value must be within 1-63 characters. Valid characters are /a-z-_/.
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Defines a conditioned behavior to employ during serving. Must be attached to a ServingConfig to be considered at serving time. Permitted actions dependent on `SolutionType`.
+  "associatedServingConfigIds": [ # Output only. List of all ServingConfig ids this control is attached to. May take up to 10 minutes to update after changes.
+    "A String",
+  ],
+  "boostAction": { # Adjusts order of products in returned list. # Defines a boost-type control
+    "boost": 3.14, # Required. Strength of the boost, which should be in [-1, 1]. Negative boost means demotion. Default is 0.0 (No-op).
+    "dataStore": "A String", # Required. Specifies which data store's documents can be boosted by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. Specifies which products to apply the boost to. If no filter is provided all products will be boosted (No-op). Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "conditions": [ # Determines when the associated action will trigger. Omit to always apply the action. Currently only a single condition may be specified. Otherwise an INVALID ARGUMENT error is thrown.
+    { # Defines circumstances to be checked before allowing a behavior
+      "activeTimeRange": [ # Range of time(s) specifying when condition is active. Maximum of 10 time ranges.
+        { # Used for time-dependent conditions.
+          "endTime": "A String", # End of time range. Range is inclusive. Must be in the future.
+          "startTime": "A String", # Start of time range. Range is inclusive.
+        },
+      ],
+      "queryTerms": [ # Search only A list of terms to match the query on. Maximum of 10 query terms.
+        { # Matcher for search request query
+          "fullMatch": True or False, # Whether the search query needs to exactly match the query term.
+          "value": "A String", # The specific query value to match against Must be lowercase, must be UTF-8. Can have at most 3 space separated terms if full_match is true. Cannot be an empty string. Maximum length of 5000 characters.
+        },
+      ],
+    },
+  ],
+  "displayName": "A String", # Required. Human readable name. The identifier used in UI views. Must be UTF-8 encoded string. Length limit is 128 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  "filterAction": { # Specified which products may be included in results. Uses same filter as boost. # Defines a filter-type control Currently not supported by Recommendation
+    "dataStore": "A String", # Required. Specifies which data store's documents can be filtered by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. A filter to apply on the matching condition results. Required Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "name": "A String", # Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`
+  "redirectAction": { # Redirects a shopper to the provided URI. # Defines a redirect-type control.
+    "redirectUri": "A String", # Required. The URI to which the shopper will be redirected. Required. URI must have length equal or less than 2000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "solutionType": "A String", # Required. Immutable. What solution the control belongs to. Must be compatible with vertical of resource. Otherwise an INVALID ARGUMENT error is thrown.
+  "synonymsAction": { # Creates a set of terms that will act as synonyms of one another. Example: "happy" will also be considered as "glad", "glad" will also be considered as "happy". # Treats a group of terms as synonyms of one another.
+    "synonyms": [ # Defines a set of synonyms. Can specify up to 100 synonyms. Must specify at least 2 synonyms. Otherwise an INVALID ARGUMENT error is thrown.
+      "A String",
+    ],
+  },
+  "useCases": [ # Specifies the use case for the control. Affects what condition fields can be set. Only applies to SOLUTION_TYPE_SEARCH. Currently only allow one use case per control. Must be set when solution_type is SolutionType.SOLUTION_TYPE_SEARCH.
+    "A String",
+  ],
+}
+
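A sketch of creating a boost Control with the client library (the resource names, boost filter, and control ID below are illustrative placeholders):

    from googleapiclient.discovery import build

    service = build("discoveryengine", "v1")

    parent = ("projects/my-project/locations/global/"
              "collections/default_collection/dataStores/my-data-store")

    control = {
        "displayName": "Promote sale items",
        "solutionType": "SOLUTION_TYPE_SEARCH",
        "boostAction": {
            "boost": 0.5,  # in [-1, 1]; negative values demote
            "dataStore": parent,
            "filter": 'category: ANY("sale")',  # placeholder filter expression
        },
    }

    created = (
        service.projects().locations().collections().dataStores().controls()
        .create(parent=parent, body=control, controlId="promote-sale-items")
        .execute()
    )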
+ +
+ delete(name, x__xgafv=None) +
Deletes a Control. If the Control to delete does not exist, a NOT_FOUND error is returned.
+
+Args:
+  name: string, Required. The resource name of the Control to delete. Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}/controls/{control_id}` (required)
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # A generic empty message that you can re-use to avoid defining duplicated empty messages in your APIs. A typical example is to use it as the request or the response type of an API method. For instance: service Foo { rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty); }
+}
+
+ +
+ get(name, x__xgafv=None) +
Gets a Control.
+
+Args:
+  name: string, Required. The resource name of the Control to get. Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}/controls/{control_id}` (required)
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Defines a conditioned behavior to employ during serving. Must be attached to a ServingConfig to be considered at serving time. Permitted actions dependent on `SolutionType`.
+  "associatedServingConfigIds": [ # Output only. List of all ServingConfig ids this control is attached to. May take up to 10 minutes to update after changes.
+    "A String",
+  ],
+  "boostAction": { # Adjusts order of products in returned list. # Defines a boost-type control
+    "boost": 3.14, # Required. Strength of the boost, which should be in [-1, 1]. Negative boost means demotion. Default is 0.0 (No-op).
+    "dataStore": "A String", # Required. Specifies which data store's documents can be boosted by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. Specifies which products to apply the boost to. If no filter is provided all products will be boosted (No-op). Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "conditions": [ # Determines when the associated action will trigger. Omit to always apply the action. Currently only a single condition may be specified. Otherwise an INVALID ARGUMENT error is thrown.
+    { # Defines circumstances to be checked before allowing a behavior
+      "activeTimeRange": [ # Range of time(s) specifying when condition is active. Maximum of 10 time ranges.
+        { # Used for time-dependent conditions.
+          "endTime": "A String", # End of time range. Range is inclusive. Must be in the future.
+          "startTime": "A String", # Start of time range. Range is inclusive.
+        },
+      ],
+      "queryTerms": [ # Search only A list of terms to match the query on. Maximum of 10 query terms.
+        { # Matcher for search request query
+          "fullMatch": True or False, # Whether the search query needs to exactly match the query term.
+          "value": "A String", # The specific query value to match against Must be lowercase, must be UTF-8. Can have at most 3 space separated terms if full_match is true. Cannot be an empty string. Maximum length of 5000 characters.
+        },
+      ],
+    },
+  ],
+  "displayName": "A String", # Required. Human readable name. The identifier used in UI views. Must be UTF-8 encoded string. Length limit is 128 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  "filterAction": { # Specified which products may be included in results. Uses same filter as boost. # Defines a filter-type control Currently not supported by Recommendation
+    "dataStore": "A String", # Required. Specifies which data store's documents can be filtered by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. A filter to apply on the matching condition results. Required Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "name": "A String", # Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`
+  "redirectAction": { # Redirects a shopper to the provided URI. # Defines a redirect-type control.
+    "redirectUri": "A String", # Required. The URI to which the shopper will be redirected. Required. URI must have length equal or less than 2000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "solutionType": "A String", # Required. Immutable. What solution the control belongs to. Must be compatible with vertical of resource. Otherwise an INVALID ARGUMENT error is thrown.
+  "synonymsAction": { # Creates a set of terms that will act as synonyms of one another. Example: "happy" will also be considered as "glad", "glad" will also be considered as "happy". # Treats a group of terms as synonyms of one another.
+    "synonyms": [ # Defines a set of synonyms. Can specify up to 100 synonyms. Must specify at least 2 synonyms. Otherwise an INVALID ARGUMENT error is thrown.
+      "A String",
+    ],
+  },
+  "useCases": [ # Specifies the use case for the control. Affects what condition fields can be set. Only applies to SOLUTION_TYPE_SEARCH. Currently only allow one use case per control. Must be set when solution_type is SolutionType.SOLUTION_TYPE_SEARCH.
+    "A String",
+  ],
+}
+
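Fetching (and later deleting) a Control follows the same pattern; a short sketch with placeholder names:

    from googleapiclient.discovery import build

    service = build("discoveryengine", "v1")
    controls = service.projects().locations().collections().dataStores().controls()

    name = ("projects/my-project/locations/global/collections/default_collection/"
            "dataStores/my-data-store/controls/promote-sale-items")

    control = controls.get(name=name).execute()   # returns the Control resource
    controls.delete(name=name).execute()          # returns an empty message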
+ +
+ list(parent, filter=None, pageSize=None, pageToken=None, x__xgafv=None) +
Lists all Controls by their parent DataStore.
+
+Args:
+  parent: string, Required. The data store resource name. Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}` (required)
+  filter: string, Optional. A filter to apply on the list results. Supported features: * List all the products under the parent branch if filter is unset. Currently this field is unsupported.
+  pageSize: integer, Optional. Maximum number of results to return. If unspecified, defaults to 50. Max allowed value is 1000.
+  pageToken: string, Optional. A page token, received from a previous `ListControls` call. Provide this to retrieve the subsequent page.
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Response for ListControls method.
+  "controls": [ # All the Controls for a given data store.
+    { # Defines a conditioned behavior to employ during serving. Must be attached to a ServingConfig to be considered at serving time. Permitted actions dependent on `SolutionType`.
+      "associatedServingConfigIds": [ # Output only. List of all ServingConfig ids this control is attached to. May take up to 10 minutes to update after changes.
+        "A String",
+      ],
+      "boostAction": { # Adjusts order of products in returned list. # Defines a boost-type control
+        "boost": 3.14, # Required. Strength of the boost, which should be in [-1, 1]. Negative boost means demotion. Default is 0.0 (No-op).
+        "dataStore": "A String", # Required. Specifies which data store's documents can be boosted by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+        "filter": "A String", # Required. Specifies which products to apply the boost to. If no filter is provided all products will be boosted (No-op). Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+      },
+      "conditions": [ # Determines when the associated action will trigger. Omit to always apply the action. Currently only a single condition may be specified. Otherwise an INVALID ARGUMENT error is thrown.
+        { # Defines circumstances to be checked before allowing a behavior
+          "activeTimeRange": [ # Range of time(s) specifying when condition is active. Maximum of 10 time ranges.
+            { # Used for time-dependent conditions.
+              "endTime": "A String", # End of time range. Range is inclusive. Must be in the future.
+              "startTime": "A String", # Start of time range. Range is inclusive.
+            },
+          ],
+          "queryTerms": [ # Search only A list of terms to match the query on. Maximum of 10 query terms.
+            { # Matcher for search request query
+              "fullMatch": True or False, # Whether the search query needs to exactly match the query term.
+              "value": "A String", # The specific query value to match against Must be lowercase, must be UTF-8. Can have at most 3 space separated terms if full_match is true. Cannot be an empty string. Maximum length of 5000 characters.
+            },
+          ],
+        },
+      ],
+      "displayName": "A String", # Required. Human readable name. The identifier used in UI views. Must be UTF-8 encoded string. Length limit is 128 characters. Otherwise an INVALID ARGUMENT error is thrown.
+      "filterAction": { # Specified which products may be included in results. Uses same filter as boost. # Defines a filter-type control Currently not supported by Recommendation
+        "dataStore": "A String", # Required. Specifies which data store's documents can be filtered by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+        "filter": "A String", # Required. A filter to apply on the matching condition results. Required Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+      },
+      "name": "A String", # Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`
+      "redirectAction": { # Redirects a shopper to the provided URI. # Defines a redirect-type control.
+        "redirectUri": "A String", # Required. The URI to which the shopper will be redirected. Required. URI must have length equal or less than 2000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+      },
+      "solutionType": "A String", # Required. Immutable. What solution the control belongs to. Must be compatible with vertical of resource. Otherwise an INVALID ARGUMENT error is thrown.
+      "synonymsAction": { # Creates a set of terms that will act as synonyms of one another. Example: "happy" will also be considered as "glad", "glad" will also be considered as "happy". # Treats a group of terms as synonyms of one another.
+        "synonyms": [ # Defines a set of synonyms. Can specify up to 100 synonyms. Must specify at least 2 synonyms. Otherwise an INVALID ARGUMENT error is thrown.
+          "A String",
+        ],
+      },
+      "useCases": [ # Specifies the use case for the control. Affects what condition fields can be set. Only applies to SOLUTION_TYPE_SEARCH. Currently only allow one use case per control. Must be set when solution_type is SolutionType.SOLUTION_TYPE_SEARCH.
+        "A String",
+      ],
+    },
+  ],
+  "nextPageToken": "A String", # Pagination token, if not returned indicates the last page.
+}
+
+ +
+ list_next() +
Retrieves the next page of results.
+
+        Args:
+          previous_request: The request for the previous page. (required)
+          previous_response: The response from the request for the previous page. (required)
+
+        Returns:
+          A request object that you can call 'execute()' on to request the next
+          page. Returns None if there are no more items in the collection.
+        
+
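A pagination sketch combining list() and list_next() (the parent name is a placeholder; list_next returns None once the last page has been fetched):

    from googleapiclient.discovery import build

    service = build("discoveryengine", "v1")
    controls = service.projects().locations().collections().dataStores().controls()

    request = controls.list(
        parent=("projects/my-project/locations/global/"
                "collections/default_collection/dataStores/my-data-store"),
        pageSize=50,
    )
    while request is not None:
        response = request.execute()
        for control in response.get("controls", []):
            print(control["name"])
        # Build the request for the next page, or None when exhausted.
        request = controls.list_next(previous_request=request, previous_response=response)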
+ +
+ patch(name, body=None, updateMask=None, x__xgafv=None) +
Updates a Control. Control action type cannot be changed. If the Control to update does not exist, a NOT_FOUND error is returned.
+
+Args:
+  name: string, Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*` (required)
+  body: object, The request body.
+    The object takes the form of:
+
+{ # Defines a conditioned behavior to employ during serving. Must be attached to a ServingConfig to be considered at serving time. Permitted actions dependent on `SolutionType`.
+  "associatedServingConfigIds": [ # Output only. List of all ServingConfig ids this control is attached to. May take up to 10 minutes to update after changes.
+    "A String",
+  ],
+  "boostAction": { # Adjusts order of products in returned list. # Defines a boost-type control
+    "boost": 3.14, # Required. Strength of the boost, which should be in [-1, 1]. Negative boost means demotion. Default is 0.0 (No-op).
+    "dataStore": "A String", # Required. Specifies which data store's documents can be boosted by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. Specifies which products to apply the boost to. If no filter is provided all products will be boosted (No-op). Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "conditions": [ # Determines when the associated action will trigger. Omit to always apply the action. Currently only a single condition may be specified. Otherwise an INVALID ARGUMENT error is thrown.
+    { # Defines circumstances to be checked before allowing a behavior
+      "activeTimeRange": [ # Range of time(s) specifying when condition is active. Maximum of 10 time ranges.
+        { # Used for time-dependent conditions.
+          "endTime": "A String", # End of time range. Range is inclusive. Must be in the future.
+          "startTime": "A String", # Start of time range. Range is inclusive.
+        },
+      ],
+      "queryTerms": [ # Search only A list of terms to match the query on. Maximum of 10 query terms.
+        { # Matcher for search request query
+          "fullMatch": True or False, # Whether the search query needs to exactly match the query term.
+          "value": "A String", # The specific query value to match against Must be lowercase, must be UTF-8. Can have at most 3 space separated terms if full_match is true. Cannot be an empty string. Maximum length of 5000 characters.
+        },
+      ],
+    },
+  ],
+  "displayName": "A String", # Required. Human readable name. The identifier used in UI views. Must be UTF-8 encoded string. Length limit is 128 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  "filterAction": { # Specified which products may be included in results. Uses same filter as boost. # Defines a filter-type control Currently not supported by Recommendation
+    "dataStore": "A String", # Required. Specifies which data store's documents can be filtered by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. A filter to apply on the matching condition results. Required Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "name": "A String", # Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`
+  "redirectAction": { # Redirects a shopper to the provided URI. # Defines a redirect-type control.
+    "redirectUri": "A String", # Required. The URI to which the shopper will be redirected. Required. URI must have length equal or less than 2000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "solutionType": "A String", # Required. Immutable. What solution the control belongs to. Must be compatible with vertical of resource. Otherwise an INVALID ARGUMENT error is thrown.
+  "synonymsAction": { # Creates a set of terms that will act as synonyms of one another. Example: "happy" will also be considered as "glad", "glad" will also be considered as "happy". # Treats a group of terms as synonyms of one another.
+    "synonyms": [ # Defines a set of synonyms. Can specify up to 100 synonyms. Must specify at least 2 synonyms. Otherwise an INVALID ARGUMENT error is thrown.
+      "A String",
+    ],
+  },
+  "useCases": [ # Specifies the use case for the control. Affects what condition fields can be set. Only applies to SOLUTION_TYPE_SEARCH. Currently only allow one use case per control. Must be set when solution_type is SolutionType.SOLUTION_TYPE_SEARCH.
+    "A String",
+  ],
+}
+
+  updateMask: string, Optional. Indicates which fields in the provided Control to update. The following are NOT supported: * Control.name * Control.solution_type If not set or empty, all supported fields are updated.
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Defines a conditioned behavior to employ during serving. Must be attached to a ServingConfig to be considered at serving time. Permitted actions dependent on `SolutionType`.
+  "associatedServingConfigIds": [ # Output only. List of all ServingConfig ids this control is attached to. May take up to 10 minutes to update after changes.
+    "A String",
+  ],
+  "boostAction": { # Adjusts order of products in returned list. # Defines a boost-type control
+    "boost": 3.14, # Required. Strength of the boost, which should be in [-1, 1]. Negative boost means demotion. Default is 0.0 (No-op).
+    "dataStore": "A String", # Required. Specifies which data store's documents can be boosted by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. Specifies which products to apply the boost to. If no filter is provided all products will be boosted (No-op). Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "conditions": [ # Determines when the associated action will trigger. Omit to always apply the action. Currently only a single condition may be specified. Otherwise an INVALID ARGUMENT error is thrown.
+    { # Defines circumstances to be checked before allowing a behavior
+      "activeTimeRange": [ # Range of time(s) specifying when condition is active. Maximum of 10 time ranges.
+        { # Used for time-dependent conditions.
+          "endTime": "A String", # End of time range. Range is inclusive. Must be in the future.
+          "startTime": "A String", # Start of time range. Range is inclusive.
+        },
+      ],
+      "queryTerms": [ # Search only A list of terms to match the query on. Maximum of 10 query terms.
+        { # Matcher for search request query
+          "fullMatch": True or False, # Whether the search query needs to exactly match the query term.
+          "value": "A String", # The specific query value to match against Must be lowercase, must be UTF-8. Can have at most 3 space separated terms if full_match is true. Cannot be an empty string. Maximum length of 5000 characters.
+        },
+      ],
+    },
+  ],
+  "displayName": "A String", # Required. Human readable name. The identifier used in UI views. Must be UTF-8 encoded string. Length limit is 128 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  "filterAction": { # Specified which products may be included in results. Uses same filter as boost. # Defines a filter-type control Currently not supported by Recommendation
+    "dataStore": "A String", # Required. Specifies which data store's documents can be filtered by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. A filter to apply on the matching condition results. Required Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "name": "A String", # Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`
+  "redirectAction": { # Redirects a shopper to the provided URI. # Defines a redirect-type control.
+    "redirectUri": "A String", # Required. The URI to which the shopper will be redirected. Required. URI must have length equal or less than 2000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "solutionType": "A String", # Required. Immutable. What solution the control belongs to. Must be compatible with vertical of resource. Otherwise an INVALID ARGUMENT error is thrown.
+  "synonymsAction": { # Creates a set of terms that will act as synonyms of one another. Example: "happy" will also be considered as "glad", "glad" will also be considered as "happy". # Treats a group of terms as synonyms of one another.
+    "synonyms": [ # Defines a set of synonyms. Can specify up to 100 synonyms. Must specify at least 2 synonyms. Otherwise an INVALID ARGUMENT error is thrown.
+      "A String",
+    ],
+  },
+  "useCases": [ # Specifies the use case for the control. Affects what condition fields can be set. Only applies to SOLUTION_TYPE_SEARCH. Currently only allow one use case per control. Must be set when solution_type is SolutionType.SOLUTION_TYPE_SEARCH.
+    "A String",
+  ],
+}
+
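An updateMask sketch for patch() (names are placeholders; only displayName is updated here, since name and solution_type cannot be changed):

    from googleapiclient.discovery import build

    service = build("discoveryengine", "v1")

    name = ("projects/my-project/locations/global/collections/default_collection/"
            "dataStores/my-data-store/controls/promote-sale-items")

    updated = (
        service.projects().locations().collections().dataStores().controls()
        .patch(
            name=name,
            updateMask="displayName",  # restrict the update to this field
            body={"displayName": "Promote clearance items"},
        )
        .execute()
    )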
+
\ No newline at end of file
diff --git a/docs/dyn/discoveryengine_v1.projects.locations.collections.dataStores.html b/docs/dyn/discoveryengine_v1.projects.locations.collections.dataStores.html
index 6e9c0342494..10c48ae910d 100644
--- a/docs/dyn/discoveryengine_v1.projects.locations.collections.dataStores.html
+++ b/docs/dyn/discoveryengine_v1.projects.locations.collections.dataStores.html
@@ -79,6 +79,11 @@

Instance Methods

Returns the branches Resource.

+

+ controls() +

+

Returns the controls Resource.

+

conversations()

@@ -391,7 +396,7 @@

Method Details

 Args:
   parent: string, Required. The parent branch resource name, such as `projects/{project}/locations/{location}/collections/{collection_id}`. If the caller does not have permission to list DataStores under this location, regardless of whether or not this data store exists, a PERMISSION_DENIED error is returned. (required)
-  filter: string, Filter by solution type . For example: filter = 'solution_type:SOLUTION_TYPE_SEARCH'
+  filter: string, Filter by solution type. For example: `filter = 'solution_type:SOLUTION_TYPE_SEARCH'`
   pageSize: integer, Maximum number of DataStores to return. If unspecified, defaults to 10. The maximum allowed value is 50. Values above 50 will be coerced to 50. If this field is negative, an INVALID_ARGUMENT is returned.
   pageToken: string, A page token ListDataStoresResponse.next_page_token, received from a previous DataStoreService.ListDataStores call. Provide this to retrieve the subsequent page. When paginating, all other parameters provided to DataStoreService.ListDataStores must match the call that provided the page token. Otherwise, an INVALID_ARGUMENT error is returned.
   x__xgafv: string, V1 error format.
diff --git a/docs/dyn/discoveryengine_v1.projects.locations.collections.dataStores.servingConfigs.html b/docs/dyn/discoveryengine_v1.projects.locations.collections.dataStores.servingConfigs.html
index b9d839c8997..5b151769ed0 100644
--- a/docs/dyn/discoveryengine_v1.projects.locations.collections.dataStores.servingConfigs.html
+++ b/docs/dyn/discoveryengine_v1.projects.locations.collections.dataStores.servingConfigs.html
@@ -103,6 +103,7 @@

Method Details

"answerGenerationSpec": { # Answer generation specification. # Answer generation specification. "answerLanguageCode": "A String", # Language code for Answer. Use language tags defined by [BCP47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt). Note: This is an experimental feature. "ignoreAdversarialQuery": True or False, # Specifies whether to filter out adversarial queries. The default value is `false`. Google employs search-query classification to detect adversarial queries. No answer is returned if the search query is classified as an adversarial query. For example, a user might ask a question regarding negative comments about the company or submit a query designed to generate unsafe, policy-violating output. If this field is set to `true`, we skip generating answers for adversarial queries and return fallback messages instead. + "ignoreLowRelevantContent": True or False, # Specifies whether to filter out queries that have low relevance. If this field is set to `false`, all search results are used regardless of relevance to generate answers. If set to `true` or unset, the behavior will be determined automatically by the service. "ignoreNonAnswerSeekingQuery": True or False, # Specifies whether to filter out queries that are not answer-seeking. The default value is `false`. Google employs search-query classification to detect answer-seeking queries. No answer is returned if the search query is classified as a non-answer seeking query. If this field is set to `true`, we skip generating answers for non-answer seeking queries and return fallback messages instead. "includeCitations": True or False, # Specifies whether to include citation metadata in the answer. The default value is `false`. "modelSpec": { # Answer Generation Model specification. # Answer generation model specification. @@ -143,6 +144,11 @@

Method Details

}, ], }, + "dataStoreSpecs": [ # Specs defining dataStores to filter on in a search call and configurations for those dataStores. This is only considered for engines with multiple dataStores use case. For single dataStore within an engine, they should use the specs at the top level. + { # A struct to define data stores to filter on in a search call and configurations for those data stores. A maximum of 1 DataStoreSpec per data_store is allowed. Otherwise, an `INVALID_ARGUMENT` error is returned. + "dataStore": "A String", # Required. Full resource name of DataStore, such as `projects/{project}/locations/{location}/collections/{collection_id}/dataStores/{data_store_id}`. + }, + ], "filter": "A String", # The filter syntax consists of an expression language for constructing a predicate from one or more fields of the documents being filtered. Filter expression is case-sensitive. This will be used to filter search results which may affect the Answer response. If this field is unrecognizable, an `INVALID_ARGUMENT` is returned. Filtering in Vertex AI Search is done by mapping the LHS filter key to a key property defined in the Vertex AI Search backend -- this mapping is defined by the customer in their schema. For example a media customers might have a field 'name' in their schema. In this case the filter would look like this: filter --> name:'ANY("king kong")' For more information about filtering including syntax and filter operators, see [Filter](https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata) "maxReturnResults": 42, # Number of search results to return. The default value is 10. "orderBy": "A String", # The order in which documents are returned. Documents can be ordered by a field in an Document object. Leave it unset if ordered by relevance. `order_by` expression is case-sensitive. For more information on ordering, see [Ordering](https://cloud.google.com/retail/docs/filter-and-order#order) If this field is unrecognizable, an `INVALID_ARGUMENT` is returned. @@ -229,6 +235,9 @@

Method Details

"documentMetadata": { # Document metadata. # Document metadata. "document": "A String", # Document resource name. "pageIdentifier": "A String", # Page identifier. + "structData": { # The structured JSON metadata for the document. It is populated from the struct data from the Chunk in search result. + "a_key": "", # Properties of the object. + }, "title": "A String", # Title. "uri": "A String", # URI for the document. }, @@ -242,6 +251,9 @@

Method Details

}, ], "document": "A String", # Document resource name. + "structData": { # The structured JSON metadata for the document. It is populated from the struct data from the Chunk in search result. + "a_key": "", # Properties of the object. + }, "title": "A String", # Title. "uri": "A String", # URI for the document. }, @@ -344,6 +356,7 @@

Method Details

"selectedPosition": 42, # End user selected CompleteQueryResponse.QuerySuggestion.suggestion position, starting from 0. "selectedSuggestion": "A String", # End user selected CompleteQueryResponse.QuerySuggestion.suggestion. }, + "dataStore": "A String", # The DataStore resource full name, of the form `projects/{project}/locations/{location}/collections/{collection_id}/dataStores/{data_store_id}`. Optional. Only required for user events whose data store can't by determined by UserEvent.engine or UserEvent.documents. If data store is set in the parent of write/import/collect user event requests, this field can be omitted. "directUserRequest": True or False, # Should set to true if the request is made directly from the end user, in which case the UserEvent.user_info.user_agent can be populated from the HTTP request. This flag should be set only if the API request is made directly from the end user such as a mobile app (and not if a gateway or a server is processing and pushing the user events). This should not be set when using the JavaScript tag in UserEventService.CollectUserEvent. "documents": [ # List of Documents associated with this user event. This field is optional except for the following event types: * `view-item` * `add-to-cart` * `purchase` * `media-play` * `media-complete` In a `search` event, this field represents the documents returned to the end user on the current page (the end user may have not finished browsing the whole page yet). When a new page is returned to the end user, after pagination/filtering/ordering even for the same query, a new `search` event with different UserEvent.documents is desired. { # Detailed document information associated with a user event. @@ -356,6 +369,7 @@

Method Details

"uri": "A String", # The Document URI - only allowed for website data stores. }, ], + "engine": "A String", # The Engine resource name, in the form of `projects/{project}/locations/{location}/collections/{collection_id}/engines/{engine_id}`. Optional. Only required for Engine produced user events. For example, user events from blended search. "eventTime": "A String", # Only required for UserEventService.ImportUserEvents method. Timestamp of when the user event happened. "eventType": "A String", # Required. User event type. Allowed values are: Generic values: * `search`: Search for Documents. * `view-item`: Detailed page view of a Document. * `view-item-list`: View of a panel or ordered list of Documents. * `view-home-page`: View of the home page. * `view-category-page`: View of a category page, e.g. Home > Men > Jeans Retail-related values: * `add-to-cart`: Add an item(s) to cart, e.g. in Retail online shopping * `purchase`: Purchase an item(s) Media-related values: * `media-play`: Start/resume watching a video, playing a song, etc. * `media-complete`: Finished or stopped midway through a video, song, etc. "filter": "A String", # The filter syntax consists of an expression language for constructing a predicate from one or more fields of the documents being filtered. One example is for `search` events, the associated SearchRequest may contain a filter expression in SearchRequest.filter conforming to https://google.aip.dev/160#filtering. Similarly, for `view-item-list` events that are generated from a RecommendRequest, this field may be populated directly from RecommendRequest.filter conforming to https://google.aip.dev/160#filtering. The value must be a UTF-8 encoded string with a length limit of 1,000 characters. Otherwise, an `INVALID_ARGUMENT` error is returned. diff --git a/docs/dyn/discoveryengine_v1.projects.locations.collections.dataStores.sessions.answers.html b/docs/dyn/discoveryengine_v1.projects.locations.collections.dataStores.sessions.answers.html index b3cf300ac84..c20dfd800ac 100644 --- a/docs/dyn/discoveryengine_v1.projects.locations.collections.dataStores.sessions.answers.html +++ b/docs/dyn/discoveryengine_v1.projects.locations.collections.dataStores.sessions.answers.html @@ -135,6 +135,9 @@

Method Details

"documentMetadata": { # Document metadata. # Document metadata. "document": "A String", # Document resource name. "pageIdentifier": "A String", # Page identifier. + "structData": { # The structured JSON metadata for the document. It is populated from the struct data from the Chunk in search result. + "a_key": "", # Properties of the object. + }, "title": "A String", # Title. "uri": "A String", # URI for the document. }, @@ -148,6 +151,9 @@

Method Details

}, ], "document": "A String", # Document resource name. + "structData": { # The structured JSON metadata for the document. It is populated from the struct data from the Chunk in the search result. + "a_key": "", # Properties of the object. + }, "title": "A String", # Title. "uri": "A String", # URI for the document. }, diff --git a/docs/dyn/discoveryengine_v1.projects.locations.collections.dataStores.userEvents.html index f7854fa13fe..3b3eb98ef72 100644 --- a/docs/dyn/discoveryengine_v1.projects.locations.collections.dataStores.userEvents.html +++ b/docs/dyn/discoveryengine_v1.projects.locations.collections.dataStores.userEvents.html @@ -84,7 +84,7 @@

Instance Methods

import_(parent, body=None, x__xgafv=None)

Bulk import of user events. Request processing might be synchronous. Events that already exist are skipped. Use this method for backfilling historical user events. Operation.response is of type ImportResponse. Note that it is possible for a subset of the items to be successfully inserted. Operation.metadata is of type ImportMetadata.

- write(parent, body=None, x__xgafv=None)

+ write(parent, body=None, writeAsync=None, x__xgafv=None)

Writes a single user event.

Method Details

@@ -169,6 +169,7 @@

Method Details

"selectedPosition": 42, # End user selected CompleteQueryResponse.QuerySuggestion.suggestion position, starting from 0. "selectedSuggestion": "A String", # End user selected CompleteQueryResponse.QuerySuggestion.suggestion. }, + "dataStore": "A String", # The DataStore resource full name, of the form `projects/{project}/locations/{location}/collections/{collection_id}/dataStores/{data_store_id}`. Optional. Only required for user events whose data store can't by determined by UserEvent.engine or UserEvent.documents. If data store is set in the parent of write/import/collect user event requests, this field can be omitted. "directUserRequest": True or False, # Should set to true if the request is made directly from the end user, in which case the UserEvent.user_info.user_agent can be populated from the HTTP request. This flag should be set only if the API request is made directly from the end user such as a mobile app (and not if a gateway or a server is processing and pushing the user events). This should not be set when using the JavaScript tag in UserEventService.CollectUserEvent. "documents": [ # List of Documents associated with this user event. This field is optional except for the following event types: * `view-item` * `add-to-cart` * `purchase` * `media-play` * `media-complete` In a `search` event, this field represents the documents returned to the end user on the current page (the end user may have not finished browsing the whole page yet). When a new page is returned to the end user, after pagination/filtering/ordering even for the same query, a new `search` event with different UserEvent.documents is desired. { # Detailed document information associated with a user event. @@ -181,6 +182,7 @@

Method Details

"uri": "A String", # The Document URI - only allowed for website data stores. }, ], + "engine": "A String", # The Engine resource name, in the form of `projects/{project}/locations/{location}/collections/{collection_id}/engines/{engine_id}`. Optional. Only required for Engine produced user events. For example, user events from blended search. "eventTime": "A String", # Only required for UserEventService.ImportUserEvents method. Timestamp of when the user event happened. "eventType": "A String", # Required. User event type. Allowed values are: Generic values: * `search`: Search for Documents. * `view-item`: Detailed page view of a Document. * `view-item-list`: View of a panel or ordered list of Documents. * `view-home-page`: View of the home page. * `view-category-page`: View of a category page, e.g. Home > Men > Jeans Retail-related values: * `add-to-cart`: Add an item(s) to cart, e.g. in Retail online shopping * `purchase`: Purchase an item(s) Media-related values: * `media-play`: Start/resume watching a video, playing a song, etc. * `media-complete`: Finished or stopped midway through a video, song, etc. "filter": "A String", # The filter syntax consists of an expression language for constructing a predicate from one or more fields of the documents being filtered. One example is for `search` events, the associated SearchRequest may contain a filter expression in SearchRequest.filter conforming to https://google.aip.dev/160#filtering. Similarly, for `view-item-list` events that are generated from a RecommendRequest, this field may be populated directly from RecommendRequest.filter conforming to https://google.aip.dev/160#filtering. The value must be a UTF-8 encoded string with a length limit of 1,000 characters. Otherwise, an `INVALID_ARGUMENT` error is returned. @@ -260,7 +262,7 @@

Method Details

- write(parent, body=None, x__xgafv=None) + write(parent, body=None, writeAsync=None, x__xgafv=None)
Writes a single user event.
 
 Args:
@@ -284,6 +286,7 @@ 

Method Details

"selectedPosition": 42, # End user selected CompleteQueryResponse.QuerySuggestion.suggestion position, starting from 0. "selectedSuggestion": "A String", # End user selected CompleteQueryResponse.QuerySuggestion.suggestion. }, + "dataStore": "A String", # The DataStore resource full name, of the form `projects/{project}/locations/{location}/collections/{collection_id}/dataStores/{data_store_id}`. Optional. Only required for user events whose data store can't by determined by UserEvent.engine or UserEvent.documents. If data store is set in the parent of write/import/collect user event requests, this field can be omitted. "directUserRequest": True or False, # Should set to true if the request is made directly from the end user, in which case the UserEvent.user_info.user_agent can be populated from the HTTP request. This flag should be set only if the API request is made directly from the end user such as a mobile app (and not if a gateway or a server is processing and pushing the user events). This should not be set when using the JavaScript tag in UserEventService.CollectUserEvent. "documents": [ # List of Documents associated with this user event. This field is optional except for the following event types: * `view-item` * `add-to-cart` * `purchase` * `media-play` * `media-complete` In a `search` event, this field represents the documents returned to the end user on the current page (the end user may have not finished browsing the whole page yet). When a new page is returned to the end user, after pagination/filtering/ordering even for the same query, a new `search` event with different UserEvent.documents is desired. { # Detailed document information associated with a user event. @@ -296,6 +299,7 @@

Method Details

"uri": "A String", # The Document URI - only allowed for website data stores. }, ], + "engine": "A String", # The Engine resource name, in the form of `projects/{project}/locations/{location}/collections/{collection_id}/engines/{engine_id}`. Optional. Only required for Engine produced user events. For example, user events from blended search. "eventTime": "A String", # Only required for UserEventService.ImportUserEvents method. Timestamp of when the user event happened. "eventType": "A String", # Required. User event type. Allowed values are: Generic values: * `search`: Search for Documents. * `view-item`: Detailed page view of a Document. * `view-item-list`: View of a panel or ordered list of Documents. * `view-home-page`: View of the home page. * `view-category-page`: View of a category page, e.g. Home > Men > Jeans Retail-related values: * `add-to-cart`: Add an item(s) to cart, e.g. in Retail online shopping * `purchase`: Purchase an item(s) Media-related values: * `media-play`: Start/resume watching a video, playing a song, etc. * `media-complete`: Finished or stopped midway through a video, song, etc. "filter": "A String", # The filter syntax consists of an expression language for constructing a predicate from one or more fields of the documents being filtered. One example is for `search` events, the associated SearchRequest may contain a filter expression in SearchRequest.filter conforming to https://google.aip.dev/160#filtering. Similarly, for `view-item-list` events that are generated from a RecommendRequest, this field may be populated directly from RecommendRequest.filter conforming to https://google.aip.dev/160#filtering. The value must be a UTF-8 encoded string with a length limit of 1,000 characters. Otherwise, an `INVALID_ARGUMENT` error is returned. @@ -342,6 +346,7 @@

Method Details

"userPseudoId": "A String", # Required. A unique identifier for tracking visitors. For example, this could be implemented with an HTTP cookie, which should be able to uniquely identify a visitor on a single device. This unique identifier should not change if the visitor log in/out of the website. Do not set the field to the same fixed ID for different users. This mixes the event history of those users together, which results in degraded model quality. The field must be a UTF-8 encoded string with a length limit of 128 characters. Otherwise, an `INVALID_ARGUMENT` error is returned. The field should not contain PII or user-data. We recommend to use Google Analytics [Client ID](https://developers.google.com/analytics/devguides/collection/analyticsjs/field-reference#clientId) for this field. } + writeAsync: boolean, If set to true, the user event is written asynchronously after validation, and the API responds without waiting for the write. x__xgafv: string, V1 error format. Allowed values 1 - v1 error format @@ -366,6 +371,7 @@

Method Details

"selectedPosition": 42, # End user selected CompleteQueryResponse.QuerySuggestion.suggestion position, starting from 0. "selectedSuggestion": "A String", # End user selected CompleteQueryResponse.QuerySuggestion.suggestion. }, + "dataStore": "A String", # The DataStore resource full name, of the form `projects/{project}/locations/{location}/collections/{collection_id}/dataStores/{data_store_id}`. Optional. Only required for user events whose data store can't by determined by UserEvent.engine or UserEvent.documents. If data store is set in the parent of write/import/collect user event requests, this field can be omitted. "directUserRequest": True or False, # Should set to true if the request is made directly from the end user, in which case the UserEvent.user_info.user_agent can be populated from the HTTP request. This flag should be set only if the API request is made directly from the end user such as a mobile app (and not if a gateway or a server is processing and pushing the user events). This should not be set when using the JavaScript tag in UserEventService.CollectUserEvent. "documents": [ # List of Documents associated with this user event. This field is optional except for the following event types: * `view-item` * `add-to-cart` * `purchase` * `media-play` * `media-complete` In a `search` event, this field represents the documents returned to the end user on the current page (the end user may have not finished browsing the whole page yet). When a new page is returned to the end user, after pagination/filtering/ordering even for the same query, a new `search` event with different UserEvent.documents is desired. { # Detailed document information associated with a user event. @@ -378,6 +384,7 @@

Method Details

"uri": "A String", # The Document URI - only allowed for website data stores. }, ], + "engine": "A String", # The Engine resource name, in the form of `projects/{project}/locations/{location}/collections/{collection_id}/engines/{engine_id}`. Optional. Only required for Engine produced user events. For example, user events from blended search. "eventTime": "A String", # Only required for UserEventService.ImportUserEvents method. Timestamp of when the user event happened. "eventType": "A String", # Required. User event type. Allowed values are: Generic values: * `search`: Search for Documents. * `view-item`: Detailed page view of a Document. * `view-item-list`: View of a panel or ordered list of Documents. * `view-home-page`: View of the home page. * `view-category-page`: View of a category page, e.g. Home > Men > Jeans Retail-related values: * `add-to-cart`: Add an item(s) to cart, e.g. in Retail online shopping * `purchase`: Purchase an item(s) Media-related values: * `media-play`: Start/resume watching a video, playing a song, etc. * `media-complete`: Finished or stopped midway through a video, song, etc. "filter": "A String", # The filter syntax consists of an expression language for constructing a predicate from one or more fields of the documents being filtered. One example is for `search` events, the associated SearchRequest may contain a filter expression in SearchRequest.filter conforming to https://google.aip.dev/160#filtering. Similarly, for `view-item-list` events that are generated from a RecommendRequest, this field may be populated directly from RecommendRequest.filter conforming to https://google.aip.dev/160#filtering. The value must be a UTF-8 encoded string with a length limit of 1,000 characters. Otherwise, an `INVALID_ARGUMENT` error is returned. diff --git a/docs/dyn/discoveryengine_v1.projects.locations.collections.engines.controls.html b/docs/dyn/discoveryengine_v1.projects.locations.collections.engines.controls.html new file mode 100644 index 00000000000..4c1aa00df23 --- /dev/null +++ b/docs/dyn/discoveryengine_v1.projects.locations.collections.engines.controls.html @@ -0,0 +1,482 @@ + + + +

Discovery Engine API . projects . locations . collections . engines . controls

+

Instance Methods

+

+ close()

+

Close httplib2 connections.

+

+ create(parent, body=None, controlId=None, x__xgafv=None)

+

Creates a Control. By default 1000 controls are allowed for a data store. A request can be submitted to adjust this limit. If the Control to create already exists, an ALREADY_EXISTS error is returned.

+

+ delete(name, x__xgafv=None)

+

Deletes a Control. If the Control to delete does not exist, a NOT_FOUND error is returned.

+

+ get(name, x__xgafv=None)

+

Gets a Control.

+

+ list(parent, filter=None, pageSize=None, pageToken=None, x__xgafv=None)

+

Lists all Controls by their parent DataStore.

+

+ list_next()

+

Retrieves the next page of results.

+

+ patch(name, body=None, updateMask=None, x__xgafv=None)

+

Updates a Control. Control action type cannot be changed. If the Control to update does not exist, a NOT_FOUND error is returned.

+

Method Details

+
+ close() +
Close httplib2 connections.
+
+ +
+ create(parent, body=None, controlId=None, x__xgafv=None) +
Creates a Control. By default 1000 controls are allowed for a data store. A request can be submitted to adjust this limit. If the Control to create already exists, an ALREADY_EXISTS error is returned.
+
+Args:
+  parent: string, Required. Full resource name of parent data store. Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}` (required)
+  body: object, The request body.
+    The object takes the form of:
+
+{ # Defines a conditioned behavior to employ during serving. Must be attached to a ServingConfig to be considered at serving time. Permitted actions are dependent on `SolutionType`.
+  "associatedServingConfigIds": [ # Output only. List of all ServingConfig ids this control is attached to. May take up to 10 minutes to update after changes.
+    "A String",
+  ],
+  "boostAction": { # Adjusts order of products in returned list. # Defines a boost-type control
+    "boost": 3.14, # Required. Strength of the boost, which should be in [-1, 1]. Negative boost means demotion. Default is 0.0 (No-op).
+    "dataStore": "A String", # Required. Specifies which data store's documents can be boosted by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. Specifies which products to apply the boost to. If no filter is provided all products will be boosted (No-op). Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "conditions": [ # Determines when the associated action will trigger. Omit to always apply the action. Currently only a single condition may be specified. Otherwise an INVALID ARGUMENT error is thrown.
+    { # Defines circumstances to be checked before allowing a behavior
+      "activeTimeRange": [ # Range of time(s) specifying when condition is active. Maximum of 10 time ranges.
+        { # Used for time-dependent conditions.
+          "endTime": "A String", # End of time range. Range is inclusive. Must be in the future.
+          "startTime": "A String", # Start of time range. Range is inclusive.
+        },
+      ],
+      "queryTerms": [ # Search only. A list of terms to match the query on. Maximum of 10 query terms.
+        { # Matcher for search request query
+          "fullMatch": True or False, # Whether the search query needs to exactly match the query term.
+          "value": "A String", # The specific query value to match against. Must be lowercase and UTF-8 encoded. Can have at most 3 space-separated terms if full_match is true. Cannot be an empty string. Maximum length of 5000 characters.
+        },
+      ],
+    },
+  ],
+  "displayName": "A String", # Required. Human-readable name. The identifier used in UI views. Must be a UTF-8 encoded string. Length limit is 128 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  "filterAction": { # Specifies which products may be included in results. Uses the same filter as boost. # Defines a filter-type control. Currently not supported by Recommendation.
+    "dataStore": "A String", # Required. Specifies which data store's documents can be filtered by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. A filter to apply on the matching condition results. Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "name": "A String", # Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`
+  "redirectAction": { # Redirects a shopper to the provided URI. # Defines a redirect-type control.
+    "redirectUri": "A String", # Required. The URI to which the shopper will be redirected. The URI must be 2000 characters or fewer. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "solutionType": "A String", # Required. Immutable. What solution the control belongs to. Must be compatible with the vertical of the resource. Otherwise an INVALID ARGUMENT error is thrown.
+  "synonymsAction": { # Creates a set of terms that will act as synonyms of one another. Example: "happy" will also be considered as "glad", "glad" will also be considered as "happy". # Treats a group of terms as synonyms of one another.
+    "synonyms": [ # Defines a set of synonyms. Can specify up to 100 synonyms. Must specify at least 2 synonyms. Otherwise an INVALID ARGUMENT error is thrown.
+      "A String",
+    ],
+  },
+  "useCases": [ # Specifies the use case for the control. Affects what condition fields can be set. Only applies to SOLUTION_TYPE_SEARCH. Currently only one use case per control is allowed. Must be set when solution_type is SolutionType.SOLUTION_TYPE_SEARCH.
+    "A String",
+  ],
+}
+
+  controlId: string, Required. The ID to use for the Control, which will become the final component of the Control's resource name. This value must be within 1-63 characters. Valid characters are /a-z-_/.
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Defines a conditioned behavior to employ during serving. Must be attached to a ServingConfig to be considered at serving time. Permitted actions are dependent on `SolutionType`.
+  "associatedServingConfigIds": [ # Output only. List of all ServingConfig ids this control is attached to. May take up to 10 minutes to update after changes.
+    "A String",
+  ],
+  "boostAction": { # Adjusts order of products in returned list. # Defines a boost-type control
+    "boost": 3.14, # Required. Strength of the boost, which should be in [-1, 1]. Negative boost means demotion. Default is 0.0 (No-op).
+    "dataStore": "A String", # Required. Specifies which data store's documents can be boosted by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. Specifies which products to apply the boost to. If no filter is provided all products will be boosted (No-op). Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "conditions": [ # Determines when the associated action will trigger. Omit to always apply the action. Currently only a single condition may be specified. Otherwise an INVALID ARGUMENT error is thrown.
+    { # Defines circumstances to be checked before allowing a behavior
+      "activeTimeRange": [ # Range of time(s) specifying when condition is active. Maximum of 10 time ranges.
+        { # Used for time-dependent conditions.
+          "endTime": "A String", # End of time range. Range is inclusive. Must be in the future.
+          "startTime": "A String", # Start of time range. Range is inclusive.
+        },
+      ],
+      "queryTerms": [ # Search only. A list of terms to match the query on. Maximum of 10 query terms.
+        { # Matcher for search request query
+          "fullMatch": True or False, # Whether the search query needs to exactly match the query term.
+          "value": "A String", # The specific query value to match against. Must be lowercase and UTF-8 encoded. Can have at most 3 space-separated terms if full_match is true. Cannot be an empty string. Maximum length of 5000 characters.
+        },
+      ],
+    },
+  ],
+  "displayName": "A String", # Required. Human-readable name. The identifier used in UI views. Must be a UTF-8 encoded string. Length limit is 128 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  "filterAction": { # Specifies which products may be included in results. Uses the same filter as boost. # Defines a filter-type control. Currently not supported by Recommendation.
+    "dataStore": "A String", # Required. Specifies which data store's documents can be filtered by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. A filter to apply on the matching condition results. Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "name": "A String", # Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`
+  "redirectAction": { # Redirects a shopper to the provided URI. # Defines a redirect-type control.
+    "redirectUri": "A String", # Required. The URI to which the shopper will be redirected. The URI must be 2000 characters or fewer. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "solutionType": "A String", # Required. Immutable. What solution the control belongs to. Must be compatible with the vertical of the resource. Otherwise an INVALID ARGUMENT error is thrown.
+  "synonymsAction": { # Creates a set of terms that will act as synonyms of one another. Example: "happy" will also be considered as "glad", "glad" will also be considered as "happy". # Treats a group of terms as synonyms of one another.
+    "synonyms": [ # Defines a set of synonyms. Can specify up to 100 synonyms. Must specify at least 2 synonyms. Otherwise an INVALID ARGUMENT error is thrown.
+      "A String",
+    ],
+  },
+  "useCases": [ # Specifies the use case for the control. Affects what condition fields can be set. Only applies to SOLUTION_TYPE_SEARCH. Currently only one use case per control is allowed. Must be set when solution_type is SolutionType.SOLUTION_TYPE_SEARCH.
+    "A String",
+  ],
+}
+
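Reusing the `service` object built in the earlier sketches, creating a boost-type control might look like the following. The engine path, controlId, filter, and enum values are assumptions for illustration; the parent here follows this page's engine-scoped resource path, even though the generated create() docstring above describes a data store parent.

  parent = (
      "projects/my-project/locations/global/collections/default_collection"
      "/engines/my-engine"
  )
  control = {
      "displayName": "Boost in-stock items",
      "solutionType": "SOLUTION_TYPE_SEARCH",
      "useCases": ["SEARCH_USE_CASE_SEARCH"],
      "boostAction": {
          "boost": 0.5,  # mild promotion; negative values demote
          "filter": 'available: ANY("true")',
          "dataStore": (
              "projects/my-project/locations/global/collections"
              "/default_collection/dataStores/my_data_store"
          ),
      },
  }
  created = (
      service.projects().locations().collections().engines().controls()
      .create(parent=parent, body=control, controlId="boost-in-stock")
      .execute()
  )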
+ +
+ delete(name, x__xgafv=None) +
Deletes a Control. If the Control to delete does not exist, a NOT_FOUND error is returned.
+
+Args:
+  name: string, Required. The resource name of the Control to delete. Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}/controls/{control_id}` (required)
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # A generic empty message that you can re-use to avoid defining duplicated empty messages in your APIs. A typical example is to use it as the request or the response type of an API method. For instance: service Foo { rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty); }
+}
+
+ +
+ get(name, x__xgafv=None) +
Gets a Control.
+
+Args:
+  name: string, Required. The resource name of the Control to get. Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}/controls/{control_id}` (required)
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Defines a conditioned behavior to employ during serving. Must be attached to a ServingConfig to be considered at serving time. Permitted actions are dependent on `SolutionType`.
+  "associatedServingConfigIds": [ # Output only. List of all ServingConfig ids this control is attached to. May take up to 10 minutes to update after changes.
+    "A String",
+  ],
+  "boostAction": { # Adjusts order of products in returned list. # Defines a boost-type control
+    "boost": 3.14, # Required. Strength of the boost, which should be in [-1, 1]. Negative boost means demotion. Default is 0.0 (No-op).
+    "dataStore": "A String", # Required. Specifies which data store's documents can be boosted by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. Specifies which products to apply the boost to. If no filter is provided all products will be boosted (No-op). Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "conditions": [ # Determines when the associated action will trigger. Omit to always apply the action. Currently only a single condition may be specified. Otherwise an INVALID ARGUMENT error is thrown.
+    { # Defines circumstances to be checked before allowing a behavior
+      "activeTimeRange": [ # Range of time(s) specifying when condition is active. Maximum of 10 time ranges.
+        { # Used for time-dependent conditions.
+          "endTime": "A String", # End of time range. Range is inclusive. Must be in the future.
+          "startTime": "A String", # Start of time range. Range is inclusive.
+        },
+      ],
+      "queryTerms": [ # Search only. A list of terms to match the query on. Maximum of 10 query terms.
+        { # Matcher for search request query
+          "fullMatch": True or False, # Whether the search query needs to exactly match the query term.
+          "value": "A String", # The specific query value to match against. Must be lowercase and UTF-8 encoded. Can have at most 3 space-separated terms if full_match is true. Cannot be an empty string. Maximum length of 5000 characters.
+        },
+      ],
+    },
+  ],
+  "displayName": "A String", # Required. Human-readable name. The identifier used in UI views. Must be a UTF-8 encoded string. Length limit is 128 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  "filterAction": { # Specifies which products may be included in results. Uses the same filter as boost. # Defines a filter-type control. Currently not supported by Recommendation.
+    "dataStore": "A String", # Required. Specifies which data store's documents can be filtered by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. A filter to apply on the matching condition results. Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "name": "A String", # Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`
+  "redirectAction": { # Redirects a shopper to the provided URI. # Defines a redirect-type control.
+    "redirectUri": "A String", # Required. The URI to which the shopper will be redirected. The URI must be 2000 characters or fewer. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "solutionType": "A String", # Required. Immutable. What solution the control belongs to. Must be compatible with the vertical of the resource. Otherwise an INVALID ARGUMENT error is thrown.
+  "synonymsAction": { # Creates a set of terms that will act as synonyms of one another. Example: "happy" will also be considered as "glad", "glad" will also be considered as "happy". # Treats a group of terms as synonyms of one another.
+    "synonyms": [ # Defines a set of synonyms. Can specify up to 100 synonyms. Must specify at least 2 synonyms. Otherwise an INVALID ARGUMENT error is thrown.
+      "A String",
+    ],
+  },
+  "useCases": [ # Specifies the use case for the control. Affects what condition fields can be set. Only applies to SOLUTION_TYPE_SEARCH. Currently only one use case per control is allowed. Must be set when solution_type is SolutionType.SOLUTION_TYPE_SEARCH.
+    "A String",
+  ],
+}
+
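get() and delete() are single-shot calls on the control's full resource name; a sketch continuing the placeholder names above:

  name = (
      "projects/my-project/locations/global/collections/default_collection"
      "/engines/my-engine/controls/boost-in-stock"
  )
  controls_api = service.projects().locations().collections().engines().controls()
  control = controls_api.get(name=name).execute()
  # delete() returns an empty message on success; a NOT_FOUND error is
  # returned if the control does not exist.
  controls_api.delete(name=name).execute()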
+ +
+ list(parent, filter=None, pageSize=None, pageToken=None, x__xgafv=None) +
Lists all Controls by their parent DataStore.
+
+Args:
+  parent: string, Required. The data store resource name. Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}` (required)
+  filter: string, Optional. A filter to apply on the list results. Supported features: * List all the products under the parent branch if filter is unset. Currently this field is unsupported.
+  pageSize: integer, Optional. Maximum number of results to return. If unspecified, defaults to 50. Max allowed value is 1000.
+  pageToken: string, Optional. A page token, received from a previous `ListControls` call. Provide this to retrieve the subsequent page.
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Response for ListControls method.
+  "controls": [ # All the Controls for a given data store.
+    { # Defines a conditioned behavior to employ during serving. Must be attached to a ServingConfig to be considered at serving time. Permitted actions are dependent on `SolutionType`.
+      "associatedServingConfigIds": [ # Output only. List of all ServingConfig ids this control is attached to. May take up to 10 minutes to update after changes.
+        "A String",
+      ],
+      "boostAction": { # Adjusts order of products in returned list. # Defines a boost-type control
+        "boost": 3.14, # Required. Strength of the boost, which should be in [-1, 1]. Negative boost means demotion. Default is 0.0 (No-op).
+        "dataStore": "A String", # Required. Specifies which data store's documents can be boosted by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+        "filter": "A String", # Required. Specifies which products to apply the boost to. If no filter is provided all products will be boosted (No-op). Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+      },
+      "conditions": [ # Determines when the associated action will trigger. Omit to always apply the action. Currently only a single condition may be specified. Otherwise an INVALID ARGUMENT error is thrown.
+        { # Defines circumstances to be checked before allowing a behavior
+          "activeTimeRange": [ # Range of time(s) specifying when condition is active. Maximum of 10 time ranges.
+            { # Used for time-dependent conditions.
+              "endTime": "A String", # End of time range. Range is inclusive. Must be in the future.
+              "startTime": "A String", # Start of time range. Range is inclusive.
+            },
+          ],
+          "queryTerms": [ # Search only. A list of terms to match the query on. Maximum of 10 query terms.
+            { # Matcher for search request query
+              "fullMatch": True or False, # Whether the search query needs to exactly match the query term.
+              "value": "A String", # The specific query value to match against. Must be lowercase and UTF-8 encoded. Can have at most 3 space-separated terms if full_match is true. Cannot be an empty string. Maximum length of 5000 characters.
+            },
+          ],
+        },
+      ],
+      "displayName": "A String", # Required. Human-readable name. The identifier used in UI views. Must be a UTF-8 encoded string. Length limit is 128 characters. Otherwise an INVALID ARGUMENT error is thrown.
+      "filterAction": { # Specifies which products may be included in results. Uses the same filter as boost. # Defines a filter-type control. Currently not supported by Recommendation.
+        "dataStore": "A String", # Required. Specifies which data store's documents can be filtered by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+        "filter": "A String", # Required. A filter to apply on the matching condition results. Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+      },
+      "name": "A String", # Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`
+      "redirectAction": { # Redirects a shopper to the provided URI. # Defines a redirect-type control.
+        "redirectUri": "A String", # Required. The URI to which the shopper will be redirected. The URI must be 2000 characters or fewer. Otherwise an INVALID ARGUMENT error is thrown.
+      },
+      "solutionType": "A String", # Required. Immutable. What solution the control belongs to. Must be compatible with the vertical of the resource. Otherwise an INVALID ARGUMENT error is thrown.
+      "synonymsAction": { # Creates a set of terms that will act as synonyms of one another. Example: "happy" will also be considered as "glad", "glad" will also be considered as "happy". # Treats a group of terms as synonyms of one another.
+        "synonyms": [ # Defines a set of synonyms. Can specify up to 100 synonyms. Must specify at least 2 synonyms. Otherwise an INVALID ARGUMENT error is thrown.
+          "A String",
+        ],
+      },
+      "useCases": [ # Specifies the use case for the control. Affects what condition fields can be set. Only applies to SOLUTION_TYPE_SEARCH. Currently only one use case per control is allowed. Must be set when solution_type is SolutionType.SOLUTION_TYPE_SEARCH.
+        "A String",
+      ],
+    },
+  ],
+  "nextPageToken": "A String", # Pagination token; if not returned, it indicates the last page.
+}
+
+ +
+ list_next() +
Retrieves the next page of results.
+
+        Args:
+          previous_request: The request for the previous page. (required)
+          previous_response: The response from the request for the previous page. (required)
+
+        Returns:
+          A request object that you can call 'execute()' on to request the next
+          page. Returns None if there are no more items in the collection.
+        
+
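list() and list_next() compose in the standard googleapiclient pagination loop; a sketch, again with placeholder names:

  controls_api = service.projects().locations().collections().engines().controls()
  request = controls_api.list(parent=parent, pageSize=50)
  while request is not None:
      response = request.execute()
      for control in response.get("controls", []):
          print(control["name"], control.get("displayName"))
      # list_next returns None once the collection is exhausted.
      request = controls_api.list_next(
          previous_request=request, previous_response=response
      )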
+ +
+ patch(name, body=None, updateMask=None, x__xgafv=None) +
Updates a Control. Control action type cannot be changed. If the Control to update does not exist, a NOT_FOUND error is returned.
+
+Args:
+  name: string, Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*` (required)
+  body: object, The request body.
+    The object takes the form of:
+
+{ # Defines a conditioned behavior to employ during serving. Must be attached to a ServingConfig to be considered at serving time. Permitted actions are dependent on `SolutionType`.
+  "associatedServingConfigIds": [ # Output only. List of all ServingConfig ids this control is attached to. May take up to 10 minutes to update after changes.
+    "A String",
+  ],
+  "boostAction": { # Adjusts order of products in returned list. # Defines a boost-type control
+    "boost": 3.14, # Required. Strength of the boost, which should be in [-1, 1]. Negative boost means demotion. Default is 0.0 (No-op).
+    "dataStore": "A String", # Required. Specifies which data store's documents can be boosted by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. Specifies which products to apply the boost to. If no filter is provided all products will be boosted (No-op). Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "conditions": [ # Determines when the associated action will trigger. Omit to always apply the action. Currently only a single condition may be specified. Otherwise an INVALID ARGUMENT error is thrown.
+    { # Defines circumstances to be checked before allowing a behavior
+      "activeTimeRange": [ # Range of time(s) specifying when condition is active. Maximum of 10 time ranges.
+        { # Used for time-dependent conditions.
+          "endTime": "A String", # End of time range. Range is inclusive. Must be in the future.
+          "startTime": "A String", # Start of time range. Range is inclusive.
+        },
+      ],
+      "queryTerms": [ # Search only. A list of terms to match the query on. Maximum of 10 query terms.
+        { # Matcher for search request query
+          "fullMatch": True or False, # Whether the search query needs to exactly match the query term.
+          "value": "A String", # The specific query value to match against. Must be lowercase and UTF-8 encoded. Can have at most 3 space-separated terms if full_match is true. Cannot be an empty string. Maximum length of 5000 characters.
+        },
+      ],
+    },
+  ],
+  "displayName": "A String", # Required. Human-readable name. The identifier used in UI views. Must be a UTF-8 encoded string. Length limit is 128 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  "filterAction": { # Specifies which products may be included in results. Uses the same filter as boost. # Defines a filter-type control. Currently not supported by Recommendation.
+    "dataStore": "A String", # Required. Specifies which data store's documents can be filtered by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. A filter to apply on the matching condition results. Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "name": "A String", # Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`
+  "redirectAction": { # Redirects a shopper to the provided URI. # Defines a redirect-type control.
+    "redirectUri": "A String", # Required. The URI to which the shopper will be redirected. The URI must be 2000 characters or fewer. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "solutionType": "A String", # Required. Immutable. What solution the control belongs to. Must be compatible with the vertical of the resource. Otherwise an INVALID ARGUMENT error is thrown.
+  "synonymsAction": { # Creates a set of terms that will act as synonyms of one another. Example: "happy" will also be considered as "glad", "glad" will also be considered as "happy". # Treats a group of terms as synonyms of one another.
+    "synonyms": [ # Defines a set of synonyms. Can specify up to 100 synonyms. Must specify at least 2 synonyms. Otherwise an INVALID ARGUMENT error is thrown.
+      "A String",
+    ],
+  },
+  "useCases": [ # Specifies the use case for the control. Affects what condition fields can be set. Only applies to SOLUTION_TYPE_SEARCH. Currently only one use case per control is allowed. Must be set when solution_type is SolutionType.SOLUTION_TYPE_SEARCH.
+    "A String",
+  ],
+}
+
+  updateMask: string, Optional. Indicates which fields in the provided Control to update. The following are NOT supported: * Control.name * Control.solution_type If not set or empty, all supported fields are updated.
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Defines a conditioned behavior to employ during serving. Must be attached to a ServingConfig to be considered at serving time. Permitted actions are dependent on `SolutionType`.
+  "associatedServingConfigIds": [ # Output only. List of all ServingConfig ids this control is attached to. May take up to 10 minutes to update after changes.
+    "A String",
+  ],
+  "boostAction": { # Adjusts order of products in returned list. # Defines a boost-type control
+    "boost": 3.14, # Required. Strength of the boost, which should be in [-1, 1]. Negative boost means demotion. Default is 0.0 (No-op).
+    "dataStore": "A String", # Required. Specifies which data store's documents can be boosted by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. Specifies which products to apply the boost to. If no filter is provided all products will be boosted (No-op). Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "conditions": [ # Determines when the associated action will trigger. Omit to always apply the action. Currently only a single condition may be specified. Otherwise an INVALID ARGUMENT error is thrown.
+    { # Defines circumstances to be checked before allowing a behavior
+      "activeTimeRange": [ # Range of time(s) specifying when condition is active. Maximum of 10 time ranges.
+        { # Used for time-dependent conditions.
+          "endTime": "A String", # End of time range. Range is inclusive. Must be in the future.
+          "startTime": "A String", # Start of time range. Range is inclusive.
+        },
+      ],
+      "queryTerms": [ # Search only. A list of terms to match the query on. Maximum of 10 query terms.
+        { # Matcher for search request query
+          "fullMatch": True or False, # Whether the search query needs to exactly match the query term.
+          "value": "A String", # The specific query value to match against. Must be lowercase and UTF-8 encoded. Can have at most 3 space-separated terms if full_match is true. Cannot be an empty string. Maximum length of 5000 characters.
+        },
+      ],
+    },
+  ],
+  "displayName": "A String", # Required. Human-readable name. The identifier used in UI views. Must be a UTF-8 encoded string. Length limit is 128 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  "filterAction": { # Specifies which products may be included in results. Uses the same filter as boost. # Defines a filter-type control. Currently not supported by Recommendation.
+    "dataStore": "A String", # Required. Specifies which data store's documents can be filtered by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. A filter to apply on the matching condition results. Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "name": "A String", # Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`
+  "redirectAction": { # Redirects a shopper to the provided URI. # Defines a redirect-type control.
+    "redirectUri": "A String", # Required. The URI to which the shopper will be redirected. The URI must be 2000 characters or fewer. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "solutionType": "A String", # Required. Immutable. What solution the control belongs to. Must be compatible with the vertical of the resource. Otherwise an INVALID ARGUMENT error is thrown.
+  "synonymsAction": { # Creates a set of terms that will act as synonyms of one another. Example: "happy" will also be considered as "glad", "glad" will also be considered as "happy". # Treats a group of terms as synonyms of one another.
+    "synonyms": [ # Defines a set of synonyms. Can specify up to 100 synonyms. Must specify at least 2 synonyms. Otherwise an INVALID ARGUMENT error is thrown.
+      "A String",
+    ],
+  },
+  "useCases": [ # Specifies the use case for the control. Affects what condition fields can be set. Only applies to SOLUTION_TYPE_SEARCH. Currently only one use case per control is allowed. Must be set when solution_type is SolutionType.SOLUTION_TYPE_SEARCH.
+    "A String",
+  ],
+}
+
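Because name and solution_type are immutable, updates typically patch a single action field with an explicit updateMask. A hedged sketch continuing the boost control created earlier; the camelCase updateMask path is an assumption:

  updated = (
      service.projects().locations().collections().engines().controls()
      .patch(
          name=created["name"],
          body={
              "boostAction": {
                  "boost": -0.2,  # flip to a mild demotion
                  "filter": 'available: ANY("false")',
                  "dataStore": control["boostAction"]["dataStore"],
              },
          },
          updateMask="boostAction",  # assumed camelCase field mask path
      )
      .execute()
  )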
+ + \ No newline at end of file diff --git a/docs/dyn/discoveryengine_v1.projects.locations.collections.engines.html b/docs/dyn/discoveryengine_v1.projects.locations.collections.engines.html index 142ee0fa637..b005243c079 100644 --- a/docs/dyn/discoveryengine_v1.projects.locations.collections.engines.html +++ b/docs/dyn/discoveryengine_v1.projects.locations.collections.engines.html @@ -74,6 +74,11 @@

Discovery Engine API . projects . locations . collections . engines

Instance Methods

+

+ controls() +

+

Returns the controls Resource.

+

conversations()

@@ -144,7 +149,7 @@

Method Details

"dialogflowAgent": "A String", # The resource name of a Dialogflow agent, that this Chat Engine refers to. Format: `projects//locations//agents/`. }, "commonConfig": { # Common configurations for an Engine. # Common config spec that specifies the metadata of the engine. - "companyName": "A String", # Immutable. The name of the company, business or entity that is associated with the engine. Setting this may help improve LLM related features. + "companyName": "A String", # The name of the company, business or entity that is associated with the engine. Setting this may help improve LLM related features. }, "createTime": "A String", # Output only. Timestamp the Recommendation Engine was created at. "dataStoreIds": [ # The data stores associated with this engine. For SOLUTION_TYPE_SEARCH and SOLUTION_TYPE_RECOMMENDATION type of engines, they can only associate with at most one data store. If solution_type is SOLUTION_TYPE_CHAT, multiple DataStores in the same Collection can be associated here. Note that when used in CreateEngineRequest, one DataStore id must be provided as the system will use it for necessary initializations. @@ -256,7 +261,7 @@

Method Details

"dialogflowAgent": "A String", # The resource name of a Dialogflow agent, that this Chat Engine refers to. Format: `projects//locations//agents/`. }, "commonConfig": { # Common configurations for an Engine. # Common config spec that specifies the metadata of the engine. - "companyName": "A String", # Immutable. The name of the company, business or entity that is associated with the engine. Setting this may help improve LLM related features. + "companyName": "A String", # The name of the company, business or entity that is associated with the engine. Setting this may help improve LLM related features. }, "createTime": "A String", # Output only. Timestamp the Recommendation Engine was created at. "dataStoreIds": [ # The data stores associated with this engine. For SOLUTION_TYPE_SEARCH and SOLUTION_TYPE_RECOMMENDATION type of engines, they can only associate with at most one data store. If solution_type is SOLUTION_TYPE_CHAT, multiple DataStores in the same Collection can be associated here. Note that when used in CreateEngineRequest, one DataStore id must be provided as the system will use it for necessary initializations. @@ -309,7 +314,7 @@

Method Details

"dialogflowAgent": "A String", # The resource name of a Dialogflow agent, that this Chat Engine refers to. Format: `projects//locations//agents/`. }, "commonConfig": { # Common configurations for an Engine. # Common config spec that specifies the metadata of the engine. - "companyName": "A String", # Immutable. The name of the company, business or entity that is associated with the engine. Setting this may help improve LLM related features. + "companyName": "A String", # The name of the company, business or entity that is associated with the engine. Setting this may help improve LLM related features. }, "createTime": "A String", # Output only. Timestamp the Recommendation Engine was created at. "dataStoreIds": [ # The data stores associated with this engine. For SOLUTION_TYPE_SEARCH and SOLUTION_TYPE_RECOMMENDATION type of engines, they can only associate with at most one data store. If solution_type is SOLUTION_TYPE_CHAT, multiple DataStores in the same Collection can be associated here. Note that when used in CreateEngineRequest, one DataStore id must be provided as the system will use it for necessary initializations. @@ -369,7 +374,7 @@

Method Details

"dialogflowAgent": "A String", # The resource name of a Dialogflow agent, that this Chat Engine refers to. Format: `projects//locations//agents/`. }, "commonConfig": { # Common configurations for an Engine. # Common config spec that specifies the metadata of the engine. - "companyName": "A String", # Immutable. The name of the company, business or entity that is associated with the engine. Setting this may help improve LLM related features. + "companyName": "A String", # The name of the company, business or entity that is associated with the engine. Setting this may help improve LLM related features. }, "createTime": "A String", # Output only. Timestamp the Recommendation Engine was created at. "dataStoreIds": [ # The data stores associated with this engine. For SOLUTION_TYPE_SEARCH and SOLUTION_TYPE_RECOMMENDATION type of engines, they can only associate with at most one data store. If solution_type is SOLUTION_TYPE_CHAT, multiple DataStores in the same Collection can be associated here. Note that when used in CreateEngineRequest, one DataStore id must be provided as the system will use it for necessary initializations. @@ -411,7 +416,7 @@

Method Details

"dialogflowAgent": "A String", # The resource name of a Dialogflow agent, that this Chat Engine refers to. Format: `projects//locations//agents/`. }, "commonConfig": { # Common configurations for an Engine. # Common config spec that specifies the metadata of the engine. - "companyName": "A String", # Immutable. The name of the company, business or entity that is associated with the engine. Setting this may help improve LLM related features. + "companyName": "A String", # The name of the company, business or entity that is associated with the engine. Setting this may help improve LLM related features. }, "createTime": "A String", # Output only. Timestamp the Recommendation Engine was created at. "dataStoreIds": [ # The data stores associated with this engine. For SOLUTION_TYPE_SEARCH and SOLUTION_TYPE_RECOMMENDATION type of engines, they can only associate with at most one data store. If solution_type is SOLUTION_TYPE_CHAT, multiple DataStores in the same Collection can be associated here. Note that when used in CreateEngineRequest, one DataStore id must be provided as the system will use it for necessary initializations. diff --git a/docs/dyn/discoveryengine_v1.projects.locations.collections.engines.servingConfigs.html b/docs/dyn/discoveryengine_v1.projects.locations.collections.engines.servingConfigs.html index da30042b1b7..955f5a2c848 100644 --- a/docs/dyn/discoveryengine_v1.projects.locations.collections.engines.servingConfigs.html +++ b/docs/dyn/discoveryengine_v1.projects.locations.collections.engines.servingConfigs.html @@ -103,6 +103,7 @@

Method Details

"answerGenerationSpec": { # Answer generation specification. # Answer generation specification. "answerLanguageCode": "A String", # Language code for Answer. Use language tags defined by [BCP47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt). Note: This is an experimental feature. "ignoreAdversarialQuery": True or False, # Specifies whether to filter out adversarial queries. The default value is `false`. Google employs search-query classification to detect adversarial queries. No answer is returned if the search query is classified as an adversarial query. For example, a user might ask a question regarding negative comments about the company or submit a query designed to generate unsafe, policy-violating output. If this field is set to `true`, we skip generating answers for adversarial queries and return fallback messages instead. + "ignoreLowRelevantContent": True or False, # Specifies whether to filter out queries that have low relevance. If this field is set to `false`, all search results are used regardless of relevance to generate answers. If set to `true` or unset, the behavior will be determined automatically by the service. "ignoreNonAnswerSeekingQuery": True or False, # Specifies whether to filter out queries that are not answer-seeking. The default value is `false`. Google employs search-query classification to detect answer-seeking queries. No answer is returned if the search query is classified as a non-answer seeking query. If this field is set to `true`, we skip generating answers for non-answer seeking queries and return fallback messages instead. "includeCitations": True or False, # Specifies whether to include citation metadata in the answer. The default value is `false`. "modelSpec": { # Answer Generation Model specification. # Answer generation model specification. @@ -143,6 +144,11 @@

@@ -143,6 +144,11 @@
         },
       ],
     },
+    "dataStoreSpecs": [ # Specs defining dataStores to filter on in a search call and configurations for those dataStores. This is only considered for engines with multiple dataStores use case. For single dataStore within an engine, they should use the specs at the top level.
+      { # A struct to define data stores to filter on in a search call and configurations for those data stores. A maximum of 1 DataStoreSpec per data_store is allowed. Otherwise, an `INVALID_ARGUMENT` error is returned.
+        "dataStore": "A String", # Required. Full resource name of DataStore, such as `projects/{project}/locations/{location}/collections/{collection_id}/dataStores/{data_store_id}`.
+      },
+    ],
     "filter": "A String", # The filter syntax consists of an expression language for constructing a predicate from one or more fields of the documents being filtered. Filter expression is case-sensitive. This will be used to filter search results which may affect the Answer response. If this field is unrecognizable, an `INVALID_ARGUMENT` is returned. Filtering in Vertex AI Search is done by mapping the LHS filter key to a key property defined in the Vertex AI Search backend -- this mapping is defined by the customer in their schema. For example a media customers might have a field 'name' in their schema. In this case the filter would look like this: filter --> name:'ANY("king kong")' For more information about filtering including syntax and filter operators, see [Filter](https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata)
     "maxReturnResults": 42, # Number of search results to return. The default value is 10.
     "orderBy": "A String", # The order in which documents are returned. Documents can be ordered by a field in an Document object. Leave it unset if ordered by relevance. `order_by` expression is case-sensitive. For more information on ordering, see [Ordering](https://cloud.google.com/retail/docs/filter-and-order#order) If this field is unrecognizable, an `INVALID_ARGUMENT` is returned.
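The new `dataStoreSpecs` list lives under `searchSpec.searchParams` and only applies to engines that blend multiple data stores. A hedged sketch of the relevant request fragment (resource names are illustrative):

    # Restrict answer generation to a single data store of a multi-store engine.
    body = {
        "query": {"text": "compare the available plans"},
        "searchSpec": {
            "searchParams": {
                "dataStoreSpecs": [
                    {
                        "dataStore": (
                            "projects/example-project/locations/global"
                            "/collections/default_collection/dataStores/store-a"
                        ),
                    },
                ],
                "maxReturnResults": 10,
            },
        },
    }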

"documentMetadata": { # Document metadata. # Document metadata. "document": "A String", # Document resource name. "pageIdentifier": "A String", # Page identifier. + "structData": { # The structured JSON metadata for the document. It is populated from the struct data from the Chunk in search result. + "a_key": "", # Properties of the object. + }, "title": "A String", # Title. "uri": "A String", # URI for the document. }, @@ -242,6 +251,9 @@

}, ], "document": "A String", # Document resource name. + "structData": { # The structured JSON metadata for the document. It is populated from the struct data from the Chunk in search result. + "a_key": "", # Properties of the object. + }, "title": "A String", # Title. "uri": "A String", # URI for the document. }, @@ -344,6 +356,7 @@

"selectedPosition": 42, # End user selected CompleteQueryResponse.QuerySuggestion.suggestion position, starting from 0. "selectedSuggestion": "A String", # End user selected CompleteQueryResponse.QuerySuggestion.suggestion. }, + "dataStore": "A String", # The DataStore resource full name, of the form `projects/{project}/locations/{location}/collections/{collection_id}/dataStores/{data_store_id}`. Optional. Only required for user events whose data store can't by determined by UserEvent.engine or UserEvent.documents. If data store is set in the parent of write/import/collect user event requests, this field can be omitted. "directUserRequest": True or False, # Should set to true if the request is made directly from the end user, in which case the UserEvent.user_info.user_agent can be populated from the HTTP request. This flag should be set only if the API request is made directly from the end user such as a mobile app (and not if a gateway or a server is processing and pushing the user events). This should not be set when using the JavaScript tag in UserEventService.CollectUserEvent. "documents": [ # List of Documents associated with this user event. This field is optional except for the following event types: * `view-item` * `add-to-cart` * `purchase` * `media-play` * `media-complete` In a `search` event, this field represents the documents returned to the end user on the current page (the end user may have not finished browsing the whole page yet). When a new page is returned to the end user, after pagination/filtering/ordering even for the same query, a new `search` event with different UserEvent.documents is desired. { # Detailed document information associated with a user event. @@ -356,6 +369,7 @@

"uri": "A String", # The Document URI - only allowed for website data stores. }, ], + "engine": "A String", # The Engine resource name, in the form of `projects/{project}/locations/{location}/collections/{collection_id}/engines/{engine_id}`. Optional. Only required for Engine produced user events. For example, user events from blended search. "eventTime": "A String", # Only required for UserEventService.ImportUserEvents method. Timestamp of when the user event happened. "eventType": "A String", # Required. User event type. Allowed values are: Generic values: * `search`: Search for Documents. * `view-item`: Detailed page view of a Document. * `view-item-list`: View of a panel or ordered list of Documents. * `view-home-page`: View of the home page. * `view-category-page`: View of a category page, e.g. Home > Men > Jeans Retail-related values: * `add-to-cart`: Add an item(s) to cart, e.g. in Retail online shopping * `purchase`: Purchase an item(s) Media-related values: * `media-play`: Start/resume watching a video, playing a song, etc. * `media-complete`: Finished or stopped midway through a video, song, etc. "filter": "A String", # The filter syntax consists of an expression language for constructing a predicate from one or more fields of the documents being filtered. One example is for `search` events, the associated SearchRequest may contain a filter expression in SearchRequest.filter conforming to https://google.aip.dev/160#filtering. Similarly, for `view-item-list` events that are generated from a RecommendRequest, this field may be populated directly from RecommendRequest.filter conforming to https://google.aip.dev/160#filtering. The value must be a UTF-8 encoded string with a length limit of 1,000 characters. Otherwise, an `INVALID_ARGUMENT` error is returned. diff --git a/docs/dyn/discoveryengine_v1.projects.locations.collections.engines.sessions.answers.html b/docs/dyn/discoveryengine_v1.projects.locations.collections.engines.sessions.answers.html index 07ddeff4121..9630d8495b8 100644 --- a/docs/dyn/discoveryengine_v1.projects.locations.collections.engines.sessions.answers.html +++ b/docs/dyn/discoveryengine_v1.projects.locations.collections.engines.sessions.answers.html @@ -135,6 +135,9 @@

"documentMetadata": { # Document metadata. # Document metadata. "document": "A String", # Document resource name. "pageIdentifier": "A String", # Page identifier. + "structData": { # The structured JSON metadata for the document. It is populated from the struct data from the Chunk in search result. + "a_key": "", # Properties of the object. + }, "title": "A String", # Title. "uri": "A String", # URI for the document. }, @@ -148,6 +151,9 @@

}, ], "document": "A String", # Document resource name. + "structData": { # The structured JSON metadata for the document. It is populated from the struct data from the Chunk in search result. + "a_key": "", # Properties of the object. + }, "title": "A String", # Title. "uri": "A String", # URI for the document. }, diff --git a/docs/dyn/discoveryengine_v1.projects.locations.dataStores.controls.html b/docs/dyn/discoveryengine_v1.projects.locations.dataStores.controls.html new file mode 100644 index 00000000000..6d7cd08683e --- /dev/null +++ b/docs/dyn/discoveryengine_v1.projects.locations.dataStores.controls.html @@ -0,0 +1,482 @@ + + + +

diff --git a/docs/dyn/discoveryengine_v1.projects.locations.dataStores.controls.html b/docs/dyn/discoveryengine_v1.projects.locations.dataStores.controls.html
new file mode 100644
index 00000000000..6d7cd08683e
--- /dev/null
+++ b/docs/dyn/discoveryengine_v1.projects.locations.dataStores.controls.html
@@ -0,0 +1,482 @@
+Discovery Engine API . projects . locations . dataStores . controls
+
+Instance Methods
+
+ close()
+Close httplib2 connections.
+
+ create(parent, body=None, controlId=None, x__xgafv=None)
+Creates a Control. By default 1000 controls are allowed for a data store. A request can be submitted to adjust this limit. If the Control to create already exists, an ALREADY_EXISTS error is returned.
+
+ delete(name, x__xgafv=None)
+Deletes a Control. If the Control to delete does not exist, a NOT_FOUND error is returned.
+
+ get(name, x__xgafv=None)
+Gets a Control.
+
+ list(parent, filter=None, pageSize=None, pageToken=None, x__xgafv=None)
+Lists all Controls by their parent DataStore.
+
+ list_next()
+Retrieves the next page of results.
+
+ patch(name, body=None, updateMask=None, x__xgafv=None)
+Updates a Control. Control action type cannot be changed. If the Control to update does not exist, a NOT_FOUND error is returned.
+
+Method Details
+
+ close()
+Close httplib2 connections.
+
+ create(parent, body=None, controlId=None, x__xgafv=None)
+Creates a Control. By default 1000 controls are allowed for a data store. A request can be submitted to adjust this limit. If the Control to create already exists, an ALREADY_EXISTS error is returned.
+
+Args:
+  parent: string, Required. Full resource name of parent data store. Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}` (required)
+  body: object, The request body.
+    The object takes the form of:
+
+{ # Defines a conditioned behavior to employ during serving. Must be attached to a ServingConfig to be considered at serving time. Permitted actions dependent on `SolutionType`.
+  "associatedServingConfigIds": [ # Output only. List of all ServingConfig ids this control is attached to. May take up to 10 minutes to update after changes.
+    "A String",
+  ],
+  "boostAction": { # Adjusts order of products in returned list. # Defines a boost-type control
+    "boost": 3.14, # Required. Strength of the boost, which should be in [-1, 1]. Negative boost means demotion. Default is 0.0 (No-op).
+    "dataStore": "A String", # Required. Specifies which data store's documents can be boosted by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. Specifies which products to apply the boost to. If no filter is provided all products will be boosted (No-op). Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "conditions": [ # Determines when the associated action will trigger. Omit to always apply the action. Currently only a single condition may be specified. Otherwise an INVALID ARGUMENT error is thrown.
+    { # Defines circumstances to be checked before allowing a behavior
+      "activeTimeRange": [ # Range of time(s) specifying when condition is active. Maximum of 10 time ranges.
+        { # Used for time-dependent conditions.
+          "endTime": "A String", # End of time range. Range is inclusive. Must be in the future.
+          "startTime": "A String", # Start of time range. Range is inclusive.
+        },
+      ],
+      "queryTerms": [ # Search only A list of terms to match the query on. Maximum of 10 query terms.
+        { # Matcher for search request query
+          "fullMatch": True or False, # Whether the search query needs to exactly match the query term.
+          "value": "A String", # The specific query value to match against Must be lowercase, must be UTF-8. Can have at most 3 space separated terms if full_match is true. Cannot be an empty string. Maximum length of 5000 characters.
+        },
+      ],
+    },
+  ],
+  "displayName": "A String", # Required. Human readable name. The identifier used in UI views. Must be UTF-8 encoded string. Length limit is 128 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  "filterAction": { # Specified which products may be included in results. Uses same filter as boost. # Defines a filter-type control Currently not supported by Recommendation
+    "dataStore": "A String", # Required. Specifies which data store's documents can be filtered by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. A filter to apply on the matching condition results. Required Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "name": "A String", # Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`
+  "redirectAction": { # Redirects a shopper to the provided URI. # Defines a redirect-type control.
+    "redirectUri": "A String", # Required. The URI to which the shopper will be redirected. Required. URI must have length equal or less than 2000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "solutionType": "A String", # Required. Immutable. What solution the control belongs to. Must be compatible with vertical of resource. Otherwise an INVALID ARGUMENT error is thrown.
+  "synonymsAction": { # Creates a set of terms that will act as synonyms of one another. Example: "happy" will also be considered as "glad", "glad" will also be considered as "happy". # Treats a group of terms as synonyms of one another.
+    "synonyms": [ # Defines a set of synonyms. Can specify up to 100 synonyms. Must specify at least 2 synonyms. Otherwise an INVALID ARGUMENT error is thrown.
+      "A String",
+    ],
+  },
+  "useCases": [ # Specifies the use case for the control. Affects what condition fields can be set. Only applies to SOLUTION_TYPE_SEARCH. Currently only allow one use case per control. Must be set when solution_type is SolutionType.SOLUTION_TYPE_SEARCH.
+    "A String",
+  ],
+}
+
+  controlId: string, Required. The ID to use for the Control, which will become the final component of the Control's resource name. This value must be within 1-63 characters. Valid characters are /a-z-_/.
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Defines a conditioned behavior to employ during serving. Must be attached to a ServingConfig to be considered at serving time. Permitted actions dependent on `SolutionType`.
+  "associatedServingConfigIds": [ # Output only. List of all ServingConfig ids this control is attached to. May take up to 10 minutes to update after changes.
+    "A String",
+  ],
+  "boostAction": { # Adjusts order of products in returned list. # Defines a boost-type control
+    "boost": 3.14, # Required. Strength of the boost, which should be in [-1, 1]. Negative boost means demotion. Default is 0.0 (No-op).
+    "dataStore": "A String", # Required. Specifies which data store's documents can be boosted by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. Specifies which products to apply the boost to. If no filter is provided all products will be boosted (No-op). Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "conditions": [ # Determines when the associated action will trigger. Omit to always apply the action. Currently only a single condition may be specified. Otherwise an INVALID ARGUMENT error is thrown.
+    { # Defines circumstances to be checked before allowing a behavior
+      "activeTimeRange": [ # Range of time(s) specifying when condition is active. Maximum of 10 time ranges.
+        { # Used for time-dependent conditions.
+          "endTime": "A String", # End of time range. Range is inclusive. Must be in the future.
+          "startTime": "A String", # Start of time range. Range is inclusive.
+        },
+      ],
+      "queryTerms": [ # Search only A list of terms to match the query on. Maximum of 10 query terms.
+        { # Matcher for search request query
+          "fullMatch": True or False, # Whether the search query needs to exactly match the query term.
+          "value": "A String", # The specific query value to match against Must be lowercase, must be UTF-8. Can have at most 3 space separated terms if full_match is true. Cannot be an empty string. Maximum length of 5000 characters.
+        },
+      ],
+    },
+  ],
+  "displayName": "A String", # Required. Human readable name. The identifier used in UI views. Must be UTF-8 encoded string. Length limit is 128 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  "filterAction": { # Specified which products may be included in results. Uses same filter as boost. # Defines a filter-type control Currently not supported by Recommendation
+    "dataStore": "A String", # Required. Specifies which data store's documents can be filtered by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. A filter to apply on the matching condition results. Required Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "name": "A String", # Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`
+  "redirectAction": { # Redirects a shopper to the provided URI. # Defines a redirect-type control.
+    "redirectUri": "A String", # Required. The URI to which the shopper will be redirected. Required. URI must have length equal or less than 2000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "solutionType": "A String", # Required. Immutable. What solution the control belongs to. Must be compatible with vertical of resource. Otherwise an INVALID ARGUMENT error is thrown.
+  "synonymsAction": { # Creates a set of terms that will act as synonyms of one another. Example: "happy" will also be considered as "glad", "glad" will also be considered as "happy". # Treats a group of terms as synonyms of one another.
+    "synonyms": [ # Defines a set of synonyms. Can specify up to 100 synonyms. Must specify at least 2 synonyms. Otherwise an INVALID ARGUMENT error is thrown.
+      "A String",
+    ],
+  },
+  "useCases": [ # Specifies the use case for the control. Affects what condition fields can be set. Only applies to SOLUTION_TYPE_SEARCH. Currently only allow one use case per control. Must be set when solution_type is SolutionType.SOLUTION_TYPE_SEARCH.
+    "A String",
+  ],
+}
+
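A hypothetical end-to-end sketch of this create() call, building a boost-type Control; the client object, resource names, boost value, filter, and the use-case enum value are all illustrative assumptions:

    from googleapiclient.discovery import build

    client = build("discoveryengine", "v1")
    parent = (
        "projects/example-project/locations/global"
        "/collections/default_collection/dataStores/default_data_store"
    )
    control = (
        client.projects().locations().dataStores().controls()
        .create(
            parent=parent,
            controlId="boost-docs",
            body={
                "displayName": "Boost documentation pages",
                "solutionType": "SOLUTION_TYPE_SEARCH",
                "useCases": ["SEARCH_USE_CASE_SEARCH"],  # assumed enum value
                "boostAction": {
                    "boost": 0.5,  # mild promotion; must stay within [-1, 1]
                    "filter": 'category: ANY("docs")',
                    "dataStore": parent,
                },
            },
        )
        .execute()
    )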
+ delete(name, x__xgafv=None)
+Deletes a Control. If the Control to delete does not exist, a NOT_FOUND error is returned.
+
+Args:
+  name: string, Required. The resource name of the Control to delete. Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}/controls/{control_id}` (required)
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # A generic empty message that you can re-use to avoid defining duplicated empty messages in your APIs. A typical example is to use it as the request or the response type of an API method. For instance: service Foo { rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty); }
+}
+
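Continuing the sketch above, deleting the control is a single call that returns the empty message (names illustrative):

    name = parent + "/controls/boost-docs"
    client.projects().locations().dataStores().controls().delete(name=name).execute()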
+ get(name, x__xgafv=None)
+Gets a Control.
+
+Args:
+  name: string, Required. The resource name of the Control to get. Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}/controls/{control_id}` (required)
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Defines a conditioned behavior to employ during serving. Must be attached to a ServingConfig to be considered at serving time. Permitted actions dependent on `SolutionType`.
+  "associatedServingConfigIds": [ # Output only. List of all ServingConfig ids this control is attached to. May take up to 10 minutes to update after changes.
+    "A String",
+  ],
+  "boostAction": { # Adjusts order of products in returned list. # Defines a boost-type control
+    "boost": 3.14, # Required. Strength of the boost, which should be in [-1, 1]. Negative boost means demotion. Default is 0.0 (No-op).
+    "dataStore": "A String", # Required. Specifies which data store's documents can be boosted by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. Specifies which products to apply the boost to. If no filter is provided all products will be boosted (No-op). Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "conditions": [ # Determines when the associated action will trigger. Omit to always apply the action. Currently only a single condition may be specified. Otherwise an INVALID ARGUMENT error is thrown.
+    { # Defines circumstances to be checked before allowing a behavior
+      "activeTimeRange": [ # Range of time(s) specifying when condition is active. Maximum of 10 time ranges.
+        { # Used for time-dependent conditions.
+          "endTime": "A String", # End of time range. Range is inclusive. Must be in the future.
+          "startTime": "A String", # Start of time range. Range is inclusive.
+        },
+      ],
+      "queryTerms": [ # Search only A list of terms to match the query on. Maximum of 10 query terms.
+        { # Matcher for search request query
+          "fullMatch": True or False, # Whether the search query needs to exactly match the query term.
+          "value": "A String", # The specific query value to match against Must be lowercase, must be UTF-8. Can have at most 3 space separated terms if full_match is true. Cannot be an empty string. Maximum length of 5000 characters.
+        },
+      ],
+    },
+  ],
+  "displayName": "A String", # Required. Human readable name. The identifier used in UI views. Must be UTF-8 encoded string. Length limit is 128 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  "filterAction": { # Specified which products may be included in results. Uses same filter as boost. # Defines a filter-type control Currently not supported by Recommendation
+    "dataStore": "A String", # Required. Specifies which data store's documents can be filtered by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. A filter to apply on the matching condition results. Required Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "name": "A String", # Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`
+  "redirectAction": { # Redirects a shopper to the provided URI. # Defines a redirect-type control.
+    "redirectUri": "A String", # Required. The URI to which the shopper will be redirected. Required. URI must have length equal or less than 2000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "solutionType": "A String", # Required. Immutable. What solution the control belongs to. Must be compatible with vertical of resource. Otherwise an INVALID ARGUMENT error is thrown.
+  "synonymsAction": { # Creates a set of terms that will act as synonyms of one another. Example: "happy" will also be considered as "glad", "glad" will also be considered as "happy". # Treats a group of terms as synonyms of one another.
+    "synonyms": [ # Defines a set of synonyms. Can specify up to 100 synonyms. Must specify at least 2 synonyms. Otherwise an INVALID ARGUMENT error is thrown.
+      "A String",
+    ],
+  },
+  "useCases": [ # Specifies the use case for the control. Affects what condition fields can be set. Only applies to SOLUTION_TYPE_SEARCH. Currently only allow one use case per control. Must be set when solution_type is SolutionType.SOLUTION_TYPE_SEARCH.
+    "A String",
+  ],
+}
+
+ list(parent, filter=None, pageSize=None, pageToken=None, x__xgafv=None)
+Lists all Controls by their parent DataStore.
+
+Args:
+  parent: string, Required. The data store resource name. Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}` (required)
+  filter: string, Optional. A filter to apply on the list results. Supported features: * List all the products under the parent branch if filter is unset. Currently this field is unsupported.
+  pageSize: integer, Optional. Maximum number of results to return. If unspecified, defaults to 50. Max allowed value is 1000.
+  pageToken: string, Optional. A page token, received from a previous `ListControls` call. Provide this to retrieve the subsequent page.
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Response for ListControls method.
+  "controls": [ # All the Controls for a given data store.
+    { # Defines a conditioned behavior to employ during serving. Must be attached to a ServingConfig to be considered at serving time. Permitted actions dependent on `SolutionType`.
+      "associatedServingConfigIds": [ # Output only. List of all ServingConfig ids this control is attached to. May take up to 10 minutes to update after changes.
+        "A String",
+      ],
+      "boostAction": { # Adjusts order of products in returned list. # Defines a boost-type control
+        "boost": 3.14, # Required. Strength of the boost, which should be in [-1, 1]. Negative boost means demotion. Default is 0.0 (No-op).
+        "dataStore": "A String", # Required. Specifies which data store's documents can be boosted by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+        "filter": "A String", # Required. Specifies which products to apply the boost to. If no filter is provided all products will be boosted (No-op). Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+      },
+      "conditions": [ # Determines when the associated action will trigger. Omit to always apply the action. Currently only a single condition may be specified. Otherwise an INVALID ARGUMENT error is thrown.
+        { # Defines circumstances to be checked before allowing a behavior
+          "activeTimeRange": [ # Range of time(s) specifying when condition is active. Maximum of 10 time ranges.
+            { # Used for time-dependent conditions.
+              "endTime": "A String", # End of time range. Range is inclusive. Must be in the future.
+              "startTime": "A String", # Start of time range. Range is inclusive.
+            },
+          ],
+          "queryTerms": [ # Search only A list of terms to match the query on. Maximum of 10 query terms.
+            { # Matcher for search request query
+              "fullMatch": True or False, # Whether the search query needs to exactly match the query term.
+              "value": "A String", # The specific query value to match against Must be lowercase, must be UTF-8. Can have at most 3 space separated terms if full_match is true. Cannot be an empty string. Maximum length of 5000 characters.
+            },
+          ],
+        },
+      ],
+      "displayName": "A String", # Required. Human readable name. The identifier used in UI views. Must be UTF-8 encoded string. Length limit is 128 characters. Otherwise an INVALID ARGUMENT error is thrown.
+      "filterAction": { # Specified which products may be included in results. Uses same filter as boost. # Defines a filter-type control Currently not supported by Recommendation
+        "dataStore": "A String", # Required. Specifies which data store's documents can be filtered by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+        "filter": "A String", # Required. A filter to apply on the matching condition results. Required Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+      },
+      "name": "A String", # Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`
+      "redirectAction": { # Redirects a shopper to the provided URI. # Defines a redirect-type control.
+        "redirectUri": "A String", # Required. The URI to which the shopper will be redirected. Required. URI must have length equal or less than 2000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+      },
+      "solutionType": "A String", # Required. Immutable. What solution the control belongs to. Must be compatible with vertical of resource. Otherwise an INVALID ARGUMENT error is thrown.
+      "synonymsAction": { # Creates a set of terms that will act as synonyms of one another. Example: "happy" will also be considered as "glad", "glad" will also be considered as "happy". # Treats a group of terms as synonyms of one another.
+        "synonyms": [ # Defines a set of synonyms. Can specify up to 100 synonyms. Must specify at least 2 synonyms. Otherwise an INVALID ARGUMENT error is thrown.
+          "A String",
+        ],
+      },
+      "useCases": [ # Specifies the use case for the control. Affects what condition fields can be set. Only applies to SOLUTION_TYPE_SEARCH. Currently only allow one use case per control. Must be set when solution_type is SolutionType.SOLUTION_TYPE_SEARCH.
+        "A String",
+      ],
+    },
+  ],
+  "nextPageToken": "A String", # Pagination token, if not returned indicates the last page.
+}
+
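A sketch of paging through every Control with list() and list_next(), reusing the illustrative `client` and `parent` from the create() example:

    request = client.projects().locations().dataStores().controls().list(
        parent=parent, pageSize=50)
    while request is not None:
        response = request.execute()
        for control in response.get("controls", []):
            print(control["name"], control.get("displayName"))
        # Returns None once nextPageToken is absent.
        request = client.projects().locations().dataStores().controls().list_next(
            previous_request=request, previous_response=response)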
+ list_next()
+Retrieves the next page of results.
+
+        Args:
+          previous_request: The request for the previous page. (required)
+          previous_response: The response from the request for the previous page. (required)
+
+        Returns:
+          A request object that you can call 'execute()' on to request the next
+          page. Returns None if there are no more items in the collection.
+        
+
+ patch(name, body=None, updateMask=None, x__xgafv=None)
+Updates a Control. Control action type cannot be changed. If the Control to update does not exist, a NOT_FOUND error is returned.
+
+Args:
+  name: string, Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*` (required)
+  body: object, The request body.
+    The object takes the form of:
+
+{ # Defines a conditioned behavior to employ during serving. Must be attached to a ServingConfig to be considered at serving time. Permitted actions dependent on `SolutionType`.
+  "associatedServingConfigIds": [ # Output only. List of all ServingConfig ids this control is attached to. May take up to 10 minutes to update after changes.
+    "A String",
+  ],
+  "boostAction": { # Adjusts order of products in returned list. # Defines a boost-type control
+    "boost": 3.14, # Required. Strength of the boost, which should be in [-1, 1]. Negative boost means demotion. Default is 0.0 (No-op).
+    "dataStore": "A String", # Required. Specifies which data store's documents can be boosted by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. Specifies which products to apply the boost to. If no filter is provided all products will be boosted (No-op). Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "conditions": [ # Determines when the associated action will trigger. Omit to always apply the action. Currently only a single condition may be specified. Otherwise an INVALID ARGUMENT error is thrown.
+    { # Defines circumstances to be checked before allowing a behavior
+      "activeTimeRange": [ # Range of time(s) specifying when condition is active. Maximum of 10 time ranges.
+        { # Used for time-dependent conditions.
+          "endTime": "A String", # End of time range. Range is inclusive. Must be in the future.
+          "startTime": "A String", # Start of time range. Range is inclusive.
+        },
+      ],
+      "queryTerms": [ # Search only A list of terms to match the query on. Maximum of 10 query terms.
+        { # Matcher for search request query
+          "fullMatch": True or False, # Whether the search query needs to exactly match the query term.
+          "value": "A String", # The specific query value to match against Must be lowercase, must be UTF-8. Can have at most 3 space separated terms if full_match is true. Cannot be an empty string. Maximum length of 5000 characters.
+        },
+      ],
+    },
+  ],
+  "displayName": "A String", # Required. Human readable name. The identifier used in UI views. Must be UTF-8 encoded string. Length limit is 128 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  "filterAction": { # Specified which products may be included in results. Uses same filter as boost. # Defines a filter-type control Currently not supported by Recommendation
+    "dataStore": "A String", # Required. Specifies which data store's documents can be filtered by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. A filter to apply on the matching condition results. Required Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "name": "A String", # Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`
+  "redirectAction": { # Redirects a shopper to the provided URI. # Defines a redirect-type control.
+    "redirectUri": "A String", # Required. The URI to which the shopper will be redirected. Required. URI must have length equal or less than 2000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "solutionType": "A String", # Required. Immutable. What solution the control belongs to. Must be compatible with vertical of resource. Otherwise an INVALID ARGUMENT error is thrown.
+  "synonymsAction": { # Creates a set of terms that will act as synonyms of one another. Example: "happy" will also be considered as "glad", "glad" will also be considered as "happy". # Treats a group of terms as synonyms of one another.
+    "synonyms": [ # Defines a set of synonyms. Can specify up to 100 synonyms. Must specify at least 2 synonyms. Otherwise an INVALID ARGUMENT error is thrown.
+      "A String",
+    ],
+  },
+  "useCases": [ # Specifies the use case for the control. Affects what condition fields can be set. Only applies to SOLUTION_TYPE_SEARCH. Currently only allow one use case per control. Must be set when solution_type is SolutionType.SOLUTION_TYPE_SEARCH.
+    "A String",
+  ],
+}
+
+  updateMask: string, Optional. Indicates which fields in the provided Control to update. The following are NOT supported: * Control.name * Control.solution_type If not set or empty, all supported fields are updated.
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Defines a conditioned behavior to employ during serving. Must be attached to a ServingConfig to be considered at serving time. Permitted actions dependent on `SolutionType`.
+  "associatedServingConfigIds": [ # Output only. List of all ServingConfig ids this control is attached to. May take up to 10 minutes to update after changes.
+    "A String",
+  ],
+  "boostAction": { # Adjusts order of products in returned list. # Defines a boost-type control
+    "boost": 3.14, # Required. Strength of the boost, which should be in [-1, 1]. Negative boost means demotion. Default is 0.0 (No-op).
+    "dataStore": "A String", # Required. Specifies which data store's documents can be boosted by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. Specifies which products to apply the boost to. If no filter is provided all products will be boosted (No-op). Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "conditions": [ # Determines when the associated action will trigger. Omit to always apply the action. Currently only a single condition may be specified. Otherwise an INVALID ARGUMENT error is thrown.
+    { # Defines circumstances to be checked before allowing a behavior
+      "activeTimeRange": [ # Range of time(s) specifying when condition is active. Maximum of 10 time ranges.
+        { # Used for time-dependent conditions.
+          "endTime": "A String", # End of time range. Range is inclusive. Must be in the future.
+          "startTime": "A String", # Start of time range. Range is inclusive.
+        },
+      ],
+      "queryTerms": [ # Search only A list of terms to match the query on. Maximum of 10 query terms.
+        { # Matcher for search request query
+          "fullMatch": True or False, # Whether the search query needs to exactly match the query term.
+          "value": "A String", # The specific query value to match against Must be lowercase, must be UTF-8. Can have at most 3 space separated terms if full_match is true. Cannot be an empty string. Maximum length of 5000 characters.
+        },
+      ],
+    },
+  ],
+  "displayName": "A String", # Required. Human readable name. The identifier used in UI views. Must be UTF-8 encoded string. Length limit is 128 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  "filterAction": { # Specified which products may be included in results. Uses same filter as boost. # Defines a filter-type control Currently not supported by Recommendation
+    "dataStore": "A String", # Required. Specifies which data store's documents can be filtered by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. A filter to apply on the matching condition results. Required Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "name": "A String", # Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`
+  "redirectAction": { # Redirects a shopper to the provided URI. # Defines a redirect-type control.
+    "redirectUri": "A String", # Required. The URI to which the shopper will be redirected. Required. URI must have length equal or less than 2000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "solutionType": "A String", # Required. Immutable. What solution the control belongs to. Must be compatible with vertical of resource. Otherwise an INVALID ARGUMENT error is thrown.
+  "synonymsAction": { # Creates a set of terms that will act as synonyms of one another. Example: "happy" will also be considered as "glad", "glad" will also be considered as "happy". # Treats a group of terms as synonyms of one another.
+    "synonyms": [ # Defines a set of synonyms. Can specify up to 100 synonyms. Must specify at least 2 synonyms. Otherwise an INVALID ARGUMENT error is thrown.
+      "A String",
+    ],
+  },
+  "useCases": [ # Specifies the use case for the control. Affects what condition fields can be set. Only applies to SOLUTION_TYPE_SEARCH. Currently only allow one use case per control. Must be set when solution_type is SolutionType.SOLUTION_TYPE_SEARCH.
+    "A String",
+  ],
+}
+
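A sketch of patch() with an update mask, again reusing the illustrative names from the create() example; only the listed fields are touched:

    client.projects().locations().dataStores().controls().patch(
        name=parent + "/controls/boost-docs",
        updateMask="displayName,boostAction",
        body={
            "displayName": "Boost documentation pages (v2)",
            "boostAction": {
                "boost": 0.8,
                "filter": 'category: ANY("docs")',
                "dataStore": parent,
            },
        },
    ).execute()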
\ No newline at end of file
diff --git a/docs/dyn/discoveryengine_v1.projects.locations.dataStores.html b/docs/dyn/discoveryengine_v1.projects.locations.dataStores.html
index f57c9ee0cab..bd408210aac 100644
--- a/docs/dyn/discoveryengine_v1.projects.locations.dataStores.html
+++ b/docs/dyn/discoveryengine_v1.projects.locations.dataStores.html
@@ -79,6 +79,11 @@

 Instance Methods

 Returns the branches Resource.

+ controls()
+
+Returns the controls Resource.
+
 conversations()

@@ -391,7 +396,7 @@

Method Details

 Args:
   parent: string, Required. The parent branch resource name, such as `projects/{project}/locations/{location}/collections/{collection_id}`. If the caller does not have permission to list DataStores under this location, regardless of whether or not this data store exists, a PERMISSION_DENIED error is returned. (required)
-  filter: string, Filter by solution type . For example: filter = 'solution_type:SOLUTION_TYPE_SEARCH'
+  filter: string, Filter by solution type . For example: `filter = 'solution_type:SOLUTION_TYPE_SEARCH'`
   pageSize: integer, Maximum number of DataStores to return. If unspecified, defaults to 10. The maximum allowed value is 50. Values above 50 will be coerced to 50. If this field is negative, an INVALID_ARGUMENT is returned.
   pageToken: string, A page token ListDataStoresResponse.next_page_token, received from a previous DataStoreService.ListDataStores call. Provide this to retrieve the subsequent page. When paginating, all other parameters provided to DataStoreService.ListDataStores must match the call that provided the page token. Otherwise, an INVALID_ARGUMENT error is returned.
   x__xgafv: string, V1 error format.
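A sketch of the filter parameter documented above, assuming the same illustrative client; the parent value follows the collection-scoped format given in the Args description:

    stores = client.projects().locations().dataStores().list(
        parent=("projects/example-project/locations/global"
                "/collections/default_collection"),
        filter="solution_type:SOLUTION_TYPE_SEARCH",
    ).execute()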

"answerGenerationSpec": { # Answer generation specification. # Answer generation specification. "answerLanguageCode": "A String", # Language code for Answer. Use language tags defined by [BCP47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt). Note: This is an experimental feature. "ignoreAdversarialQuery": True or False, # Specifies whether to filter out adversarial queries. The default value is `false`. Google employs search-query classification to detect adversarial queries. No answer is returned if the search query is classified as an adversarial query. For example, a user might ask a question regarding negative comments about the company or submit a query designed to generate unsafe, policy-violating output. If this field is set to `true`, we skip generating answers for adversarial queries and return fallback messages instead. + "ignoreLowRelevantContent": True or False, # Specifies whether to filter out queries that have low relevance. If this field is set to `false`, all search results are used regardless of relevance to generate answers. If set to `true` or unset, the behavior will be determined automatically by the service. "ignoreNonAnswerSeekingQuery": True or False, # Specifies whether to filter out queries that are not answer-seeking. The default value is `false`. Google employs search-query classification to detect answer-seeking queries. No answer is returned if the search query is classified as a non-answer seeking query. If this field is set to `true`, we skip generating answers for non-answer seeking queries and return fallback messages instead. "includeCitations": True or False, # Specifies whether to include citation metadata in the answer. The default value is `false`. "modelSpec": { # Answer Generation Model specification. # Answer generation model specification. @@ -143,6 +144,11 @@

@@ -143,6 +144,11 @@
         },
       ],
     },
+    "dataStoreSpecs": [ # Specs defining dataStores to filter on in a search call and configurations for those dataStores. This is only considered for engines with multiple dataStores use case. For single dataStore within an engine, they should use the specs at the top level.
+      { # A struct to define data stores to filter on in a search call and configurations for those data stores. A maximum of 1 DataStoreSpec per data_store is allowed. Otherwise, an `INVALID_ARGUMENT` error is returned.
+        "dataStore": "A String", # Required. Full resource name of DataStore, such as `projects/{project}/locations/{location}/collections/{collection_id}/dataStores/{data_store_id}`.
+      },
+    ],
     "filter": "A String", # The filter syntax consists of an expression language for constructing a predicate from one or more fields of the documents being filtered. Filter expression is case-sensitive. This will be used to filter search results which may affect the Answer response. If this field is unrecognizable, an `INVALID_ARGUMENT` is returned. Filtering in Vertex AI Search is done by mapping the LHS filter key to a key property defined in the Vertex AI Search backend -- this mapping is defined by the customer in their schema. For example a media customers might have a field 'name' in their schema. In this case the filter would look like this: filter --> name:'ANY("king kong")' For more information about filtering including syntax and filter operators, see [Filter](https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata)
     "maxReturnResults": 42, # Number of search results to return. The default value is 10.
     "orderBy": "A String", # The order in which documents are returned. Documents can be ordered by a field in an Document object. Leave it unset if ordered by relevance. `order_by` expression is case-sensitive. For more information on ordering, see [Ordering](https://cloud.google.com/retail/docs/filter-and-order#order) If this field is unrecognizable, an `INVALID_ARGUMENT` is returned.

"documentMetadata": { # Document metadata. # Document metadata. "document": "A String", # Document resource name. "pageIdentifier": "A String", # Page identifier. + "structData": { # The structured JSON metadata for the document. It is populated from the struct data from the Chunk in search result. + "a_key": "", # Properties of the object. + }, "title": "A String", # Title. "uri": "A String", # URI for the document. }, @@ -242,6 +251,9 @@

}, ], "document": "A String", # Document resource name. + "structData": { # The structured JSON metadata for the document. It is populated from the struct data from the Chunk in search result. + "a_key": "", # Properties of the object. + }, "title": "A String", # Title. "uri": "A String", # URI for the document. }, @@ -344,6 +356,7 @@

"selectedPosition": 42, # End user selected CompleteQueryResponse.QuerySuggestion.suggestion position, starting from 0. "selectedSuggestion": "A String", # End user selected CompleteQueryResponse.QuerySuggestion.suggestion. }, + "dataStore": "A String", # The DataStore resource full name, of the form `projects/{project}/locations/{location}/collections/{collection_id}/dataStores/{data_store_id}`. Optional. Only required for user events whose data store can't by determined by UserEvent.engine or UserEvent.documents. If data store is set in the parent of write/import/collect user event requests, this field can be omitted. "directUserRequest": True or False, # Should set to true if the request is made directly from the end user, in which case the UserEvent.user_info.user_agent can be populated from the HTTP request. This flag should be set only if the API request is made directly from the end user such as a mobile app (and not if a gateway or a server is processing and pushing the user events). This should not be set when using the JavaScript tag in UserEventService.CollectUserEvent. "documents": [ # List of Documents associated with this user event. This field is optional except for the following event types: * `view-item` * `add-to-cart` * `purchase` * `media-play` * `media-complete` In a `search` event, this field represents the documents returned to the end user on the current page (the end user may have not finished browsing the whole page yet). When a new page is returned to the end user, after pagination/filtering/ordering even for the same query, a new `search` event with different UserEvent.documents is desired. { # Detailed document information associated with a user event. @@ -356,6 +369,7 @@

"uri": "A String", # The Document URI - only allowed for website data stores. }, ], + "engine": "A String", # The Engine resource name, in the form of `projects/{project}/locations/{location}/collections/{collection_id}/engines/{engine_id}`. Optional. Only required for Engine produced user events. For example, user events from blended search. "eventTime": "A String", # Only required for UserEventService.ImportUserEvents method. Timestamp of when the user event happened. "eventType": "A String", # Required. User event type. Allowed values are: Generic values: * `search`: Search for Documents. * `view-item`: Detailed page view of a Document. * `view-item-list`: View of a panel or ordered list of Documents. * `view-home-page`: View of the home page. * `view-category-page`: View of a category page, e.g. Home > Men > Jeans Retail-related values: * `add-to-cart`: Add an item(s) to cart, e.g. in Retail online shopping * `purchase`: Purchase an item(s) Media-related values: * `media-play`: Start/resume watching a video, playing a song, etc. * `media-complete`: Finished or stopped midway through a video, song, etc. "filter": "A String", # The filter syntax consists of an expression language for constructing a predicate from one or more fields of the documents being filtered. One example is for `search` events, the associated SearchRequest may contain a filter expression in SearchRequest.filter conforming to https://google.aip.dev/160#filtering. Similarly, for `view-item-list` events that are generated from a RecommendRequest, this field may be populated directly from RecommendRequest.filter conforming to https://google.aip.dev/160#filtering. The value must be a UTF-8 encoded string with a length limit of 1,000 characters. Otherwise, an `INVALID_ARGUMENT` error is returned. diff --git a/docs/dyn/discoveryengine_v1.projects.locations.dataStores.sessions.answers.html b/docs/dyn/discoveryengine_v1.projects.locations.dataStores.sessions.answers.html index 46c65eea803..7d6e96d8c4b 100644 --- a/docs/dyn/discoveryengine_v1.projects.locations.dataStores.sessions.answers.html +++ b/docs/dyn/discoveryengine_v1.projects.locations.dataStores.sessions.answers.html @@ -135,6 +135,9 @@

"documentMetadata": { # Document metadata. # Document metadata. "document": "A String", # Document resource name. "pageIdentifier": "A String", # Page identifier. + "structData": { # The structured JSON metadata for the document. It is populated from the struct data from the Chunk in search result. + "a_key": "", # Properties of the object. + }, "title": "A String", # Title. "uri": "A String", # URI for the document. }, @@ -148,6 +151,9 @@

}, ], "document": "A String", # Document resource name. + "structData": { # The structured JSON metadata for the document. It is populated from the struct data from the Chunk in search result. + "a_key": "", # Properties of the object. + }, "title": "A String", # Title. "uri": "A String", # URI for the document. }, diff --git a/docs/dyn/discoveryengine_v1.projects.locations.dataStores.userEvents.html b/docs/dyn/discoveryengine_v1.projects.locations.dataStores.userEvents.html index e663fb909bd..5fd7a663c0c 100644 --- a/docs/dyn/discoveryengine_v1.projects.locations.dataStores.userEvents.html +++ b/docs/dyn/discoveryengine_v1.projects.locations.dataStores.userEvents.html @@ -84,7 +84,7 @@

Instance Methods

import_(parent, body=None, x__xgafv=None)

Bulk import of user events. Request processing might be synchronous. Events that already exist are skipped. Use this method for backfilling historical user events. Operation.response is of type ImportResponse. Note that it is possible for a subset of the items to be successfully inserted. Operation.metadata is of type ImportMetadata.

- write(parent, body=None, x__xgafv=None)

+ write(parent, body=None, writeAsync=None, x__xgafv=None)

Writes a single user event.

Method Details

@@ -169,6 +169,7 @@

"selectedPosition": 42, # End user selected CompleteQueryResponse.QuerySuggestion.suggestion position, starting from 0. "selectedSuggestion": "A String", # End user selected CompleteQueryResponse.QuerySuggestion.suggestion. }, + "dataStore": "A String", # The DataStore resource full name, of the form `projects/{project}/locations/{location}/collections/{collection_id}/dataStores/{data_store_id}`. Optional. Only required for user events whose data store can't by determined by UserEvent.engine or UserEvent.documents. If data store is set in the parent of write/import/collect user event requests, this field can be omitted. "directUserRequest": True or False, # Should set to true if the request is made directly from the end user, in which case the UserEvent.user_info.user_agent can be populated from the HTTP request. This flag should be set only if the API request is made directly from the end user such as a mobile app (and not if a gateway or a server is processing and pushing the user events). This should not be set when using the JavaScript tag in UserEventService.CollectUserEvent. "documents": [ # List of Documents associated with this user event. This field is optional except for the following event types: * `view-item` * `add-to-cart` * `purchase` * `media-play` * `media-complete` In a `search` event, this field represents the documents returned to the end user on the current page (the end user may have not finished browsing the whole page yet). When a new page is returned to the end user, after pagination/filtering/ordering even for the same query, a new `search` event with different UserEvent.documents is desired. { # Detailed document information associated with a user event. @@ -181,6 +182,7 @@

"uri": "A String", # The Document URI - only allowed for website data stores. }, ], + "engine": "A String", # The Engine resource name, in the form of `projects/{project}/locations/{location}/collections/{collection_id}/engines/{engine_id}`. Optional. Only required for Engine produced user events. For example, user events from blended search. "eventTime": "A String", # Only required for UserEventService.ImportUserEvents method. Timestamp of when the user event happened. "eventType": "A String", # Required. User event type. Allowed values are: Generic values: * `search`: Search for Documents. * `view-item`: Detailed page view of a Document. * `view-item-list`: View of a panel or ordered list of Documents. * `view-home-page`: View of the home page. * `view-category-page`: View of a category page, e.g. Home > Men > Jeans Retail-related values: * `add-to-cart`: Add an item(s) to cart, e.g. in Retail online shopping * `purchase`: Purchase an item(s) Media-related values: * `media-play`: Start/resume watching a video, playing a song, etc. * `media-complete`: Finished or stopped midway through a video, song, etc. "filter": "A String", # The filter syntax consists of an expression language for constructing a predicate from one or more fields of the documents being filtered. One example is for `search` events, the associated SearchRequest may contain a filter expression in SearchRequest.filter conforming to https://google.aip.dev/160#filtering. Similarly, for `view-item-list` events that are generated from a RecommendRequest, this field may be populated directly from RecommendRequest.filter conforming to https://google.aip.dev/160#filtering. The value must be a UTF-8 encoded string with a length limit of 1,000 characters. Otherwise, an `INVALID_ARGUMENT` error is returned. @@ -260,7 +262,7 @@

@@ -260,7 +262,7 @@
- write(parent, body=None, x__xgafv=None)
+ write(parent, body=None, writeAsync=None, x__xgafv=None)
Writes a single user event.
 
 Args:
@@ -284,6 +286,7 @@ 

Method Details

"selectedPosition": 42, # End user selected CompleteQueryResponse.QuerySuggestion.suggestion position, starting from 0. "selectedSuggestion": "A String", # End user selected CompleteQueryResponse.QuerySuggestion.suggestion. }, + "dataStore": "A String", # The DataStore resource full name, of the form `projects/{project}/locations/{location}/collections/{collection_id}/dataStores/{data_store_id}`. Optional. Only required for user events whose data store can't by determined by UserEvent.engine or UserEvent.documents. If data store is set in the parent of write/import/collect user event requests, this field can be omitted. "directUserRequest": True or False, # Should set to true if the request is made directly from the end user, in which case the UserEvent.user_info.user_agent can be populated from the HTTP request. This flag should be set only if the API request is made directly from the end user such as a mobile app (and not if a gateway or a server is processing and pushing the user events). This should not be set when using the JavaScript tag in UserEventService.CollectUserEvent. "documents": [ # List of Documents associated with this user event. This field is optional except for the following event types: * `view-item` * `add-to-cart` * `purchase` * `media-play` * `media-complete` In a `search` event, this field represents the documents returned to the end user on the current page (the end user may have not finished browsing the whole page yet). When a new page is returned to the end user, after pagination/filtering/ordering even for the same query, a new `search` event with different UserEvent.documents is desired. { # Detailed document information associated with a user event. @@ -296,6 +299,7 @@

Method Details

"uri": "A String", # The Document URI - only allowed for website data stores. }, ], + "engine": "A String", # The Engine resource name, in the form of `projects/{project}/locations/{location}/collections/{collection_id}/engines/{engine_id}`. Optional. Only required for Engine produced user events. For example, user events from blended search. "eventTime": "A String", # Only required for UserEventService.ImportUserEvents method. Timestamp of when the user event happened. "eventType": "A String", # Required. User event type. Allowed values are: Generic values: * `search`: Search for Documents. * `view-item`: Detailed page view of a Document. * `view-item-list`: View of a panel or ordered list of Documents. * `view-home-page`: View of the home page. * `view-category-page`: View of a category page, e.g. Home > Men > Jeans Retail-related values: * `add-to-cart`: Add an item(s) to cart, e.g. in Retail online shopping * `purchase`: Purchase an item(s) Media-related values: * `media-play`: Start/resume watching a video, playing a song, etc. * `media-complete`: Finished or stopped midway through a video, song, etc. "filter": "A String", # The filter syntax consists of an expression language for constructing a predicate from one or more fields of the documents being filtered. One example is for `search` events, the associated SearchRequest may contain a filter expression in SearchRequest.filter conforming to https://google.aip.dev/160#filtering. Similarly, for `view-item-list` events that are generated from a RecommendRequest, this field may be populated directly from RecommendRequest.filter conforming to https://google.aip.dev/160#filtering. The value must be a UTF-8 encoded string with a length limit of 1,000 characters. Otherwise, an `INVALID_ARGUMENT` error is returned. @@ -342,6 +346,7 @@

Method Details

"userPseudoId": "A String", # Required. A unique identifier for tracking visitors. For example, this could be implemented with an HTTP cookie, which should be able to uniquely identify a visitor on a single device. This unique identifier should not change if the visitor log in/out of the website. Do not set the field to the same fixed ID for different users. This mixes the event history of those users together, which results in degraded model quality. The field must be a UTF-8 encoded string with a length limit of 128 characters. Otherwise, an `INVALID_ARGUMENT` error is returned. The field should not contain PII or user-data. We recommend to use Google Analytics [Client ID](https://developers.google.com/analytics/devguides/collection/analyticsjs/field-reference#clientId) for this field. } + writeAsync: boolean, If set to true, the user event is written asynchronously after validation, and the API responds without waiting for the write. x__xgafv: string, V1 error format. Allowed values 1 - v1 error format @@ -366,6 +371,7 @@

Method Details

"selectedPosition": 42, # End user selected CompleteQueryResponse.QuerySuggestion.suggestion position, starting from 0. "selectedSuggestion": "A String", # End user selected CompleteQueryResponse.QuerySuggestion.suggestion. }, + "dataStore": "A String", # The DataStore resource full name, of the form `projects/{project}/locations/{location}/collections/{collection_id}/dataStores/{data_store_id}`. Optional. Only required for user events whose data store can't by determined by UserEvent.engine or UserEvent.documents. If data store is set in the parent of write/import/collect user event requests, this field can be omitted. "directUserRequest": True or False, # Should set to true if the request is made directly from the end user, in which case the UserEvent.user_info.user_agent can be populated from the HTTP request. This flag should be set only if the API request is made directly from the end user such as a mobile app (and not if a gateway or a server is processing and pushing the user events). This should not be set when using the JavaScript tag in UserEventService.CollectUserEvent. "documents": [ # List of Documents associated with this user event. This field is optional except for the following event types: * `view-item` * `add-to-cart` * `purchase` * `media-play` * `media-complete` In a `search` event, this field represents the documents returned to the end user on the current page (the end user may have not finished browsing the whole page yet). When a new page is returned to the end user, after pagination/filtering/ordering even for the same query, a new `search` event with different UserEvent.documents is desired. { # Detailed document information associated with a user event. @@ -378,6 +384,7 @@

Method Details

"uri": "A String", # The Document URI - only allowed for website data stores. }, ], + "engine": "A String", # The Engine resource name, in the form of `projects/{project}/locations/{location}/collections/{collection_id}/engines/{engine_id}`. Optional. Only required for Engine produced user events. For example, user events from blended search. "eventTime": "A String", # Only required for UserEventService.ImportUserEvents method. Timestamp of when the user event happened. "eventType": "A String", # Required. User event type. Allowed values are: Generic values: * `search`: Search for Documents. * `view-item`: Detailed page view of a Document. * `view-item-list`: View of a panel or ordered list of Documents. * `view-home-page`: View of the home page. * `view-category-page`: View of a category page, e.g. Home > Men > Jeans Retail-related values: * `add-to-cart`: Add an item(s) to cart, e.g. in Retail online shopping * `purchase`: Purchase an item(s) Media-related values: * `media-play`: Start/resume watching a video, playing a song, etc. * `media-complete`: Finished or stopped midway through a video, song, etc. "filter": "A String", # The filter syntax consists of an expression language for constructing a predicate from one or more fields of the documents being filtered. One example is for `search` events, the associated SearchRequest may contain a filter expression in SearchRequest.filter conforming to https://google.aip.dev/160#filtering. Similarly, for `view-item-list` events that are generated from a RecommendRequest, this field may be populated directly from RecommendRequest.filter conforming to https://google.aip.dev/160#filtering. The value must be a UTF-8 encoded string with a length limit of 1,000 characters. Otherwise, an `INVALID_ARGUMENT` error is returned. diff --git a/docs/dyn/discoveryengine_v1.projects.locations.rankingConfigs.html b/docs/dyn/discoveryengine_v1.projects.locations.rankingConfigs.html index 00a327e2e50..02e3420ee47 100644 --- a/docs/dyn/discoveryengine_v1.projects.locations.rankingConfigs.html +++ b/docs/dyn/discoveryengine_v1.projects.locations.rankingConfigs.html @@ -108,6 +108,9 @@

Method Details

}, ], "topN": 42, # The number of results to return. If this is unset or no bigger than zero, returns all results. + "userLabels": { # The user labels applied to a resource must meet the following requirements: * Each resource can have multiple labels, up to a maximum of 64. * Each label must be a key-value pair. * Keys have a minimum length of 1 character and a maximum length of 63 characters and cannot be empty. Values can be empty and have a maximum length of 63 characters. * Keys and values can contain only lowercase letters, numeric characters, underscores, and dashes. All characters must use UTF-8 encoding, and international characters are allowed. * The key portion of a label must be unique. However, you can use the same key with multiple resources. * Keys must start with a lowercase letter or international character. See [Google Cloud Document](https://cloud.google.com/resource-manager/docs/creating-managing-labels#requirements) for more details. + "a_key": "A String", + }, } x__xgafv: string, V1 error format. diff --git a/docs/dyn/discoveryengine_v1.projects.locations.userEvents.html b/docs/dyn/discoveryengine_v1.projects.locations.userEvents.html index 47d430d50de..36084ab3dc4 100644 --- a/docs/dyn/discoveryengine_v1.projects.locations.userEvents.html +++ b/docs/dyn/discoveryengine_v1.projects.locations.userEvents.html @@ -78,7 +78,7 @@

Instance Methods

close()

Close httplib2 connections.

- write(parent, body=None, x__xgafv=None)

+ write(parent, body=None, writeAsync=None, x__xgafv=None)

Writes a single user event.

Method Details

@@ -87,7 +87,7 @@

Method Details

- write(parent, body=None, x__xgafv=None) + write(parent, body=None, writeAsync=None, x__xgafv=None)
Writes a single user event.
 
 Args:
@@ -111,6 +111,7 @@ 

Method Details

"selectedPosition": 42, # End user selected CompleteQueryResponse.QuerySuggestion.suggestion position, starting from 0. "selectedSuggestion": "A String", # End user selected CompleteQueryResponse.QuerySuggestion.suggestion. }, + "dataStore": "A String", # The DataStore resource full name, of the form `projects/{project}/locations/{location}/collections/{collection_id}/dataStores/{data_store_id}`. Optional. Only required for user events whose data store can't by determined by UserEvent.engine or UserEvent.documents. If data store is set in the parent of write/import/collect user event requests, this field can be omitted. "directUserRequest": True or False, # Should set to true if the request is made directly from the end user, in which case the UserEvent.user_info.user_agent can be populated from the HTTP request. This flag should be set only if the API request is made directly from the end user such as a mobile app (and not if a gateway or a server is processing and pushing the user events). This should not be set when using the JavaScript tag in UserEventService.CollectUserEvent. "documents": [ # List of Documents associated with this user event. This field is optional except for the following event types: * `view-item` * `add-to-cart` * `purchase` * `media-play` * `media-complete` In a `search` event, this field represents the documents returned to the end user on the current page (the end user may have not finished browsing the whole page yet). When a new page is returned to the end user, after pagination/filtering/ordering even for the same query, a new `search` event with different UserEvent.documents is desired. { # Detailed document information associated with a user event. @@ -123,6 +124,7 @@

Method Details

"uri": "A String", # The Document URI - only allowed for website data stores. }, ], + "engine": "A String", # The Engine resource name, in the form of `projects/{project}/locations/{location}/collections/{collection_id}/engines/{engine_id}`. Optional. Only required for Engine produced user events. For example, user events from blended search. "eventTime": "A String", # Only required for UserEventService.ImportUserEvents method. Timestamp of when the user event happened. "eventType": "A String", # Required. User event type. Allowed values are: Generic values: * `search`: Search for Documents. * `view-item`: Detailed page view of a Document. * `view-item-list`: View of a panel or ordered list of Documents. * `view-home-page`: View of the home page. * `view-category-page`: View of a category page, e.g. Home > Men > Jeans Retail-related values: * `add-to-cart`: Add an item(s) to cart, e.g. in Retail online shopping * `purchase`: Purchase an item(s) Media-related values: * `media-play`: Start/resume watching a video, playing a song, etc. * `media-complete`: Finished or stopped midway through a video, song, etc. "filter": "A String", # The filter syntax consists of an expression language for constructing a predicate from one or more fields of the documents being filtered. One example is for `search` events, the associated SearchRequest may contain a filter expression in SearchRequest.filter conforming to https://google.aip.dev/160#filtering. Similarly, for `view-item-list` events that are generated from a RecommendRequest, this field may be populated directly from RecommendRequest.filter conforming to https://google.aip.dev/160#filtering. The value must be a UTF-8 encoded string with a length limit of 1,000 characters. Otherwise, an `INVALID_ARGUMENT` error is returned. @@ -169,6 +171,7 @@

Method Details

"userPseudoId": "A String", # Required. A unique identifier for tracking visitors. For example, this could be implemented with an HTTP cookie, which should be able to uniquely identify a visitor on a single device. This unique identifier should not change if the visitor log in/out of the website. Do not set the field to the same fixed ID for different users. This mixes the event history of those users together, which results in degraded model quality. The field must be a UTF-8 encoded string with a length limit of 128 characters. Otherwise, an `INVALID_ARGUMENT` error is returned. The field should not contain PII or user-data. We recommend to use Google Analytics [Client ID](https://developers.google.com/analytics/devguides/collection/analyticsjs/field-reference#clientId) for this field. } + writeAsync: boolean, If set to true, the user event is written asynchronously after validation, and the API responds without waiting for the write. x__xgafv: string, V1 error format. Allowed values 1 - v1 error format @@ -193,6 +196,7 @@

Method Details

"selectedPosition": 42, # End user selected CompleteQueryResponse.QuerySuggestion.suggestion position, starting from 0. "selectedSuggestion": "A String", # End user selected CompleteQueryResponse.QuerySuggestion.suggestion. }, + "dataStore": "A String", # The DataStore resource full name, of the form `projects/{project}/locations/{location}/collections/{collection_id}/dataStores/{data_store_id}`. Optional. Only required for user events whose data store can't by determined by UserEvent.engine or UserEvent.documents. If data store is set in the parent of write/import/collect user event requests, this field can be omitted. "directUserRequest": True or False, # Should set to true if the request is made directly from the end user, in which case the UserEvent.user_info.user_agent can be populated from the HTTP request. This flag should be set only if the API request is made directly from the end user such as a mobile app (and not if a gateway or a server is processing and pushing the user events). This should not be set when using the JavaScript tag in UserEventService.CollectUserEvent. "documents": [ # List of Documents associated with this user event. This field is optional except for the following event types: * `view-item` * `add-to-cart` * `purchase` * `media-play` * `media-complete` In a `search` event, this field represents the documents returned to the end user on the current page (the end user may have not finished browsing the whole page yet). When a new page is returned to the end user, after pagination/filtering/ordering even for the same query, a new `search` event with different UserEvent.documents is desired. { # Detailed document information associated with a user event. @@ -205,6 +209,7 @@

Method Details

"uri": "A String", # The Document URI - only allowed for website data stores. }, ], + "engine": "A String", # The Engine resource name, in the form of `projects/{project}/locations/{location}/collections/{collection_id}/engines/{engine_id}`. Optional. Only required for Engine produced user events. For example, user events from blended search. "eventTime": "A String", # Only required for UserEventService.ImportUserEvents method. Timestamp of when the user event happened. "eventType": "A String", # Required. User event type. Allowed values are: Generic values: * `search`: Search for Documents. * `view-item`: Detailed page view of a Document. * `view-item-list`: View of a panel or ordered list of Documents. * `view-home-page`: View of the home page. * `view-category-page`: View of a category page, e.g. Home > Men > Jeans Retail-related values: * `add-to-cart`: Add an item(s) to cart, e.g. in Retail online shopping * `purchase`: Purchase an item(s) Media-related values: * `media-play`: Start/resume watching a video, playing a song, etc. * `media-complete`: Finished or stopped midway through a video, song, etc. "filter": "A String", # The filter syntax consists of an expression language for constructing a predicate from one or more fields of the documents being filtered. One example is for `search` events, the associated SearchRequest may contain a filter expression in SearchRequest.filter conforming to https://google.aip.dev/160#filtering. Similarly, for `view-item-list` events that are generated from a RecommendRequest, this field may be populated directly from RecommendRequest.filter conforming to https://google.aip.dev/160#filtering. The value must be a UTF-8 encoded string with a length limit of 1,000 characters. Otherwise, an `INVALID_ARGUMENT` error is returned. diff --git a/docs/dyn/discoveryengine_v1alpha.projects.locations.collections.dataStores.branches.documents.chunks.html b/docs/dyn/discoveryengine_v1alpha.projects.locations.collections.dataStores.branches.documents.chunks.html index 6c110970301..606c23f4de4 100644 --- a/docs/dyn/discoveryengine_v1alpha.projects.locations.collections.dataStores.branches.documents.chunks.html +++ b/docs/dyn/discoveryengine_v1alpha.projects.locations.collections.dataStores.branches.documents.chunks.html @@ -132,7 +132,7 @@

Method Details

"pageEnd": 42, # The end page of the chunk. "pageStart": 42, # The start page of the chunk. }, - "relevanceScore": 3.14, # Represents the relevance score based on similarity. Higher score represents the chunk relevance. The score is in range [-1.0, 1.0]. Only populated on SearchService.SearchResponse + "relevanceScore": 3.14, # Output only. Represents the relevance score based on similarity. Higher score indicates higher chunk relevance. The score is in range [-1.0, 1.0]. Only populated on SearchService.SearchResponse. }
@@ -180,7 +180,7 @@

Method Details

"pageEnd": 42, # The end page of the chunk. "pageStart": 42, # The start page of the chunk. }, - "relevanceScore": 3.14, # Represents the relevance score based on similarity. Higher score represents the chunk relevance. The score is in range [-1.0, 1.0]. Only populated on SearchService.SearchResponse + "relevanceScore": 3.14, # Output only. Represents the relevance score based on similarity. Higher score indicates higher chunk relevance. The score is in range [-1.0, 1.0]. Only populated on SearchService.SearchResponse. }, ], "nextPageToken": "A String", # A token that can be sent as ListChunksRequest.page_token to retrieve the next page. If this field is omitted, there are no subsequent pages. diff --git a/docs/dyn/discoveryengine_v1alpha.projects.locations.collections.dataStores.controls.html b/docs/dyn/discoveryengine_v1alpha.projects.locations.collections.dataStores.controls.html new file mode 100644 index 00000000000..be6b18e1f39 --- /dev/null +++ b/docs/dyn/discoveryengine_v1alpha.projects.locations.collections.dataStores.controls.html @@ -0,0 +1,482 @@ + + + +

Discovery Engine API . projects . locations . collections . dataStores . controls

+

Instance Methods

+

+ close()

+

Close httplib2 connections.

+

+ create(parent, body=None, controlId=None, x__xgafv=None)

+

Creates a Control. By default 1000 controls are allowed for a data store. A request can be submitted to adjust this limit. If the Control to create already exists, an ALREADY_EXISTS error is returned.

+

+ delete(name, x__xgafv=None)

+

Deletes a Control. If the Control to delete does not exist, a NOT_FOUND error is returned.

+

+ get(name, x__xgafv=None)

+

Gets a Control.

+

+ list(parent, filter=None, pageSize=None, pageToken=None, x__xgafv=None)

+

Lists all Controls by their parent DataStore.

+

+ list_next()

+

Retrieves the next page of results.

+

+ patch(name, body=None, updateMask=None, x__xgafv=None)

+

Updates a Control. Control action type cannot be changed. If the Control to update does not exist, a NOT_FOUND error is returned.

+

Method Details

+
+ close() +
Close httplib2 connections.
+
+ +
+ create(parent, body=None, controlId=None, x__xgafv=None) +
Creates a Control. By default 1000 controls are allowed for a data store. A request can be submitted to adjust this limit. If the Control to create already exists, an ALREADY_EXISTS error is returned.
+
+Args:
+  parent: string, Required. Full resource name of parent data store. Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}` (required)
+  body: object, The request body.
+    The object takes the form of:
+
+{ # Defines a conditioned behavior to employ during serving. Must be attached to a ServingConfig to be considered at serving time. Permitted actions dependent on `SolutionType`.
+  "associatedServingConfigIds": [ # Output only. List of all ServingConfig ids this control is attached to. May take up to 10 minutes to update after changes.
+    "A String",
+  ],
+  "boostAction": { # Adjusts order of products in returned list. # Defines a boost-type control
+    "boost": 3.14, # Required. Strength of the boost, which should be in [-1, 1]. Negative boost means demotion. Default is 0.0 (No-op).
+    "dataStore": "A String", # Required. Specifies which data store's documents can be boosted by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. Specifies which products to apply the boost to. If no filter is provided all products will be boosted (No-op). Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "conditions": [ # Determines when the associated action will trigger. Omit to always apply the action. Currently only a single condition may be specified. Otherwise an INVALID ARGUMENT error is thrown.
+    { # Defines circumstances to be checked before allowing a behavior
+      "activeTimeRange": [ # Range of time(s) specifying when condition is active. Maximum of 10 time ranges.
+        { # Used for time-dependent conditions.
+          "endTime": "A String", # End of time range. Range is inclusive. Must be in the future.
+          "startTime": "A String", # Start of time range. Range is inclusive.
+        },
+      ],
+      "queryTerms": [ # Search only A list of terms to match the query on. Maximum of 10 query terms.
+        { # Matcher for search request query
+          "fullMatch": True or False, # Whether the search query needs to exactly match the query term.
+          "value": "A String", # The specific query value to match against Must be lowercase, must be UTF-8. Can have at most 3 space separated terms if full_match is true. Cannot be an empty string. Maximum length of 5000 characters.
+        },
+      ],
+    },
+  ],
+  "displayName": "A String", # Required. Human readable name. The identifier used in UI views. Must be UTF-8 encoded string. Length limit is 128 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  "filterAction": { # Specified which products may be included in results. Uses same filter as boost. # Defines a filter-type control Currently not supported by Recommendation
+    "dataStore": "A String", # Required. Specifies which data store's documents can be filtered by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. A filter to apply on the matching condition results. Required Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "name": "A String", # Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`
+  "redirectAction": { # Redirects a shopper to the provided URI. # Defines a redirect-type control.
+    "redirectUri": "A String", # Required. The URI to which the shopper will be redirected. Required. URI must have length equal or less than 2000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "solutionType": "A String", # Required. Immutable. What solution the control belongs to. Must be compatible with vertical of resource. Otherwise an INVALID ARGUMENT error is thrown.
+  "synonymsAction": { # Creates a set of terms that will act as synonyms of one another. Example: "happy" will also be considered as "glad", "glad" will also be considered as "happy". # Treats a group of terms as synonyms of one another.
+    "synonyms": [ # Defines a set of synonyms. Can specify up to 100 synonyms. Must specify at least 2 synonyms. Otherwise an INVALID ARGUMENT error is thrown.
+      "A String",
+    ],
+  },
+  "useCases": [ # Specifies the use case for the control. Affects what condition fields can be set. Only applies to SOLUTION_TYPE_SEARCH. Currently only allow one use case per control. Must be set when solution_type is SolutionType.SOLUTION_TYPE_SEARCH.
+    "A String",
+  ],
+}
+
+  controlId: string, Required. The ID to use for the Control, which will become the final component of the Control's resource name. This value must be within 1-63 characters. Valid characters are /a-z-_/.
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Defines a conditioned behavior to employ during serving. Must be attached to a ServingConfig to be considered at serving time. Permitted actions dependent on `SolutionType`.
+  "associatedServingConfigIds": [ # Output only. List of all ServingConfig ids this control is attached to. May take up to 10 minutes to update after changes.
+    "A String",
+  ],
+  "boostAction": { # Adjusts order of products in returned list. # Defines a boost-type control
+    "boost": 3.14, # Required. Strength of the boost, which should be in [-1, 1]. Negative boost means demotion. Default is 0.0 (No-op).
+    "dataStore": "A String", # Required. Specifies which data store's documents can be boosted by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. Specifies which products to apply the boost to. If no filter is provided all products will be boosted (No-op). Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "conditions": [ # Determines when the associated action will trigger. Omit to always apply the action. Currently only a single condition may be specified. Otherwise an INVALID ARGUMENT error is thrown.
+    { # Defines circumstances to be checked before allowing a behavior
+      "activeTimeRange": [ # Range of time(s) specifying when condition is active. Maximum of 10 time ranges.
+        { # Used for time-dependent conditions.
+          "endTime": "A String", # End of time range. Range is inclusive. Must be in the future.
+          "startTime": "A String", # Start of time range. Range is inclusive.
+        },
+      ],
+      "queryTerms": [ # Search only A list of terms to match the query on. Maximum of 10 query terms.
+        { # Matcher for search request query
+          "fullMatch": True or False, # Whether the search query needs to exactly match the query term.
+          "value": "A String", # The specific query value to match against Must be lowercase, must be UTF-8. Can have at most 3 space separated terms if full_match is true. Cannot be an empty string. Maximum length of 5000 characters.
+        },
+      ],
+    },
+  ],
+  "displayName": "A String", # Required. Human readable name. The identifier used in UI views. Must be UTF-8 encoded string. Length limit is 128 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  "filterAction": { # Specified which products may be included in results. Uses same filter as boost. # Defines a filter-type control Currently not supported by Recommendation
+    "dataStore": "A String", # Required. Specifies which data store's documents can be filtered by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. A filter to apply on the matching condition results. Required Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "name": "A String", # Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`
+  "redirectAction": { # Redirects a shopper to the provided URI. # Defines a redirect-type control.
+    "redirectUri": "A String", # Required. The URI to which the shopper will be redirected. Required. URI must have length equal or less than 2000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "solutionType": "A String", # Required. Immutable. What solution the control belongs to. Must be compatible with vertical of resource. Otherwise an INVALID ARGUMENT error is thrown.
+  "synonymsAction": { # Creates a set of terms that will act as synonyms of one another. Example: "happy" will also be considered as "glad", "glad" will also be considered as "happy". # Treats a group of terms as synonyms of one another.
+    "synonyms": [ # Defines a set of synonyms. Can specify up to 100 synonyms. Must specify at least 2 synonyms. Otherwise an INVALID ARGUMENT error is thrown.
+      "A String",
+    ],
+  },
+  "useCases": [ # Specifies the use case for the control. Affects what condition fields can be set. Only applies to SOLUTION_TYPE_SEARCH. Currently only allow one use case per control. Must be set when solution_type is SolutionType.SOLUTION_TYPE_SEARCH.
+    "A String",
+  ],
+}
+
+ +
+ delete(name, x__xgafv=None) +
Deletes a Control. If the Control to delete does not exist, a NOT_FOUND error is returned.
+
+Args:
+  name: string, Required. The resource name of the Control to delete. Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}/controls/{control_id}` (required)
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # A generic empty message that you can re-use to avoid defining duplicated empty messages in your APIs. A typical example is to use it as the request or the response type of an API method. For instance: service Foo { rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty); }
+}
+
+ +
+ get(name, x__xgafv=None) +
Gets a Control.
+
+Args:
+  name: string, Required. The resource name of the Control to get. Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}/controls/{control_id}` (required)
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Defines a conditioned behavior to employ during serving. Must be attached to a ServingConfig to be considered at serving time. Permitted actions dependent on `SolutionType`.
+  "associatedServingConfigIds": [ # Output only. List of all ServingConfig ids this control is attached to. May take up to 10 minutes to update after changes.
+    "A String",
+  ],
+  "boostAction": { # Adjusts order of products in returned list. # Defines a boost-type control
+    "boost": 3.14, # Required. Strength of the boost, which should be in [-1, 1]. Negative boost means demotion. Default is 0.0 (No-op).
+    "dataStore": "A String", # Required. Specifies which data store's documents can be boosted by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. Specifies which products to apply the boost to. If no filter is provided all products will be boosted (No-op). Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "conditions": [ # Determines when the associated action will trigger. Omit to always apply the action. Currently only a single condition may be specified. Otherwise an INVALID ARGUMENT error is thrown.
+    { # Defines circumstances to be checked before allowing a behavior
+      "activeTimeRange": [ # Range of time(s) specifying when condition is active. Maximum of 10 time ranges.
+        { # Used for time-dependent conditions.
+          "endTime": "A String", # End of time range. Range is inclusive. Must be in the future.
+          "startTime": "A String", # Start of time range. Range is inclusive.
+        },
+      ],
+      "queryTerms": [ # Search only A list of terms to match the query on. Maximum of 10 query terms.
+        { # Matcher for search request query
+          "fullMatch": True or False, # Whether the search query needs to exactly match the query term.
+          "value": "A String", # The specific query value to match against Must be lowercase, must be UTF-8. Can have at most 3 space separated terms if full_match is true. Cannot be an empty string. Maximum length of 5000 characters.
+        },
+      ],
+    },
+  ],
+  "displayName": "A String", # Required. Human readable name. The identifier used in UI views. Must be UTF-8 encoded string. Length limit is 128 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  "filterAction": { # Specified which products may be included in results. Uses same filter as boost. # Defines a filter-type control Currently not supported by Recommendation
+    "dataStore": "A String", # Required. Specifies which data store's documents can be filtered by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. A filter to apply on the matching condition results. Required Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "name": "A String", # Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`
+  "redirectAction": { # Redirects a shopper to the provided URI. # Defines a redirect-type control.
+    "redirectUri": "A String", # Required. The URI to which the shopper will be redirected. Required. URI must have length equal or less than 2000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "solutionType": "A String", # Required. Immutable. What solution the control belongs to. Must be compatible with vertical of resource. Otherwise an INVALID ARGUMENT error is thrown.
+  "synonymsAction": { # Creates a set of terms that will act as synonyms of one another. Example: "happy" will also be considered as "glad", "glad" will also be considered as "happy". # Treats a group of terms as synonyms of one another.
+    "synonyms": [ # Defines a set of synonyms. Can specify up to 100 synonyms. Must specify at least 2 synonyms. Otherwise an INVALID ARGUMENT error is thrown.
+      "A String",
+    ],
+  },
+  "useCases": [ # Specifies the use case for the control. Affects what condition fields can be set. Only applies to SOLUTION_TYPE_SEARCH. Currently only allow one use case per control. Must be set when solution_type is SolutionType.SOLUTION_TYPE_SEARCH.
+    "A String",
+  ],
+}
+
+ +
+ list(parent, filter=None, pageSize=None, pageToken=None, x__xgafv=None) +
Lists all Controls by their parent DataStore.
+
+Args:
+  parent: string, Required. The data store resource name. Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}` (required)
+  filter: string, Optional. A filter to apply on the list results. Supported features: * List all the products under the parent branch if filter is unset. Currently this field is unsupported.
+  pageSize: integer, Optional. Maximum number of results to return. If unspecified, defaults to 50. Max allowed value is 1000.
+  pageToken: string, Optional. A page token, received from a previous `ListControls` call. Provide this to retrieve the subsequent page.
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Response for ListControls method.
+  "controls": [ # All the Controls for a given data store.
+    { # Defines a conditioned behavior to employ during serving. Must be attached to a ServingConfig to be considered at serving time. Permitted actions dependent on `SolutionType`.
+      "associatedServingConfigIds": [ # Output only. List of all ServingConfig ids this control is attached to. May take up to 10 minutes to update after changes.
+        "A String",
+      ],
+      "boostAction": { # Adjusts order of products in returned list. # Defines a boost-type control
+        "boost": 3.14, # Required. Strength of the boost, which should be in [-1, 1]. Negative boost means demotion. Default is 0.0 (No-op).
+        "dataStore": "A String", # Required. Specifies which data store's documents can be boosted by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+        "filter": "A String", # Required. Specifies which products to apply the boost to. If no filter is provided all products will be boosted (No-op). Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+      },
+      "conditions": [ # Determines when the associated action will trigger. Omit to always apply the action. Currently only a single condition may be specified. Otherwise an INVALID ARGUMENT error is thrown.
+        { # Defines circumstances to be checked before allowing a behavior
+          "activeTimeRange": [ # Range of time(s) specifying when condition is active. Maximum of 10 time ranges.
+            { # Used for time-dependent conditions.
+              "endTime": "A String", # End of time range. Range is inclusive. Must be in the future.
+              "startTime": "A String", # Start of time range. Range is inclusive.
+            },
+          ],
+          "queryTerms": [ # Search only A list of terms to match the query on. Maximum of 10 query terms.
+            { # Matcher for search request query
+              "fullMatch": True or False, # Whether the search query needs to exactly match the query term.
+              "value": "A String", # The specific query value to match against Must be lowercase, must be UTF-8. Can have at most 3 space separated terms if full_match is true. Cannot be an empty string. Maximum length of 5000 characters.
+            },
+          ],
+        },
+      ],
+      "displayName": "A String", # Required. Human readable name. The identifier used in UI views. Must be UTF-8 encoded string. Length limit is 128 characters. Otherwise an INVALID ARGUMENT error is thrown.
+      "filterAction": { # Specified which products may be included in results. Uses same filter as boost. # Defines a filter-type control Currently not supported by Recommendation
+        "dataStore": "A String", # Required. Specifies which data store's documents can be filtered by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+        "filter": "A String", # Required. A filter to apply on the matching condition results. Required Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+      },
+      "name": "A String", # Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`
+      "redirectAction": { # Redirects a shopper to the provided URI. # Defines a redirect-type control.
+        "redirectUri": "A String", # Required. The URI to which the shopper will be redirected. Required. URI must have length equal or less than 2000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+      },
+      "solutionType": "A String", # Required. Immutable. What solution the control belongs to. Must be compatible with vertical of resource. Otherwise an INVALID ARGUMENT error is thrown.
+      "synonymsAction": { # Creates a set of terms that will act as synonyms of one another. Example: "happy" will also be considered as "glad", "glad" will also be considered as "happy". # Treats a group of terms as synonyms of one another.
+        "synonyms": [ # Defines a set of synonyms. Can specify up to 100 synonyms. Must specify at least 2 synonyms. Otherwise an INVALID ARGUMENT error is thrown.
+          "A String",
+        ],
+      },
+      "useCases": [ # Specifies the use case for the control. Affects what condition fields can be set. Only applies to SOLUTION_TYPE_SEARCH. Currently only allow one use case per control. Must be set when solution_type is SolutionType.SOLUTION_TYPE_SEARCH.
+        "A String",
+      ],
+    },
+  ],
+  "nextPageToken": "A String", # Pagination token, if not returned indicates the last page.
+}
+
+ +
+ list_next() +
Retrieves the next page of results.
+
+        Args:
+          previous_request: The request for the previous page. (required)
+          previous_response: The response from the request for the previous page. (required)
+
+        Returns:
+          A request object that you can call 'execute()' on to request the next
+          page. Returns None if there are no more items in the collection.
+        
+
+ +
+ patch(name, body=None, updateMask=None, x__xgafv=None) +
Updates a Control. Control action type cannot be changed. If the Control to update does not exist, a NOT_FOUND error is returned.
+
+Args:
+  name: string, Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*` (required)
+  body: object, The request body.
+    The object takes the form of:
+
+{ # Defines a conditioned behavior to employ during serving. Must be attached to a ServingConfig to be considered at serving time. Permitted actions dependent on `SolutionType`.
+  "associatedServingConfigIds": [ # Output only. List of all ServingConfig ids this control is attached to. May take up to 10 minutes to update after changes.
+    "A String",
+  ],
+  "boostAction": { # Adjusts order of products in returned list. # Defines a boost-type control
+    "boost": 3.14, # Required. Strength of the boost, which should be in [-1, 1]. Negative boost means demotion. Default is 0.0 (No-op).
+    "dataStore": "A String", # Required. Specifies which data store's documents can be boosted by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. Specifies which products to apply the boost to. If no filter is provided all products will be boosted (No-op). Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "conditions": [ # Determines when the associated action will trigger. Omit to always apply the action. Currently only a single condition may be specified. Otherwise an INVALID ARGUMENT error is thrown.
+    { # Defines circumstances to be checked before allowing a behavior
+      "activeTimeRange": [ # Range of time(s) specifying when condition is active. Maximum of 10 time ranges.
+        { # Used for time-dependent conditions.
+          "endTime": "A String", # End of time range. Range is inclusive. Must be in the future.
+          "startTime": "A String", # Start of time range. Range is inclusive.
+        },
+      ],
+      "queryTerms": [ # Search only A list of terms to match the query on. Maximum of 10 query terms.
+        { # Matcher for search request query
+          "fullMatch": True or False, # Whether the search query needs to exactly match the query term.
+          "value": "A String", # The specific query value to match against Must be lowercase, must be UTF-8. Can have at most 3 space separated terms if full_match is true. Cannot be an empty string. Maximum length of 5000 characters.
+        },
+      ],
+    },
+  ],
+  "displayName": "A String", # Required. Human readable name. The identifier used in UI views. Must be UTF-8 encoded string. Length limit is 128 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  "filterAction": { # Specified which products may be included in results. Uses same filter as boost. # Defines a filter-type control Currently not supported by Recommendation
+    "dataStore": "A String", # Required. Specifies which data store's documents can be filtered by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. A filter to apply on the matching condition results. Required Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "name": "A String", # Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`
+  "redirectAction": { # Redirects a shopper to the provided URI. # Defines a redirect-type control.
+    "redirectUri": "A String", # Required. The URI to which the shopper will be redirected. Required. URI must have length equal or less than 2000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "solutionType": "A String", # Required. Immutable. What solution the control belongs to. Must be compatible with vertical of resource. Otherwise an INVALID ARGUMENT error is thrown.
+  "synonymsAction": { # Creates a set of terms that will act as synonyms of one another. Example: "happy" will also be considered as "glad", "glad" will also be considered as "happy". # Treats a group of terms as synonyms of one another.
+    "synonyms": [ # Defines a set of synonyms. Can specify up to 100 synonyms. Must specify at least 2 synonyms. Otherwise an INVALID ARGUMENT error is thrown.
+      "A String",
+    ],
+  },
+  "useCases": [ # Specifies the use case for the control. Affects what condition fields can be set. Only applies to SOLUTION_TYPE_SEARCH. Currently only allow one use case per control. Must be set when solution_type is SolutionType.SOLUTION_TYPE_SEARCH.
+    "A String",
+  ],
+}
+
+  updateMask: string, Optional. Indicates which fields in the provided Control to update. The following are NOT supported: * Control.name * Control.solution_type If not set or empty, all supported fields are updated.
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Defines a conditioned behavior to employ during serving. Must be attached to a ServingConfig to be considered at serving time. Permitted actions dependent on `SolutionType`.
+  "associatedServingConfigIds": [ # Output only. List of all ServingConfig ids this control is attached to. May take up to 10 minutes to update after changes.
+    "A String",
+  ],
+  "boostAction": { # Adjusts order of products in returned list. # Defines a boost-type control
+    "boost": 3.14, # Required. Strength of the boost, which should be in [-1, 1]. Negative boost means demotion. Default is 0.0 (No-op).
+    "dataStore": "A String", # Required. Specifies which data store's documents can be boosted by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. Specifies which products to apply the boost to. If no filter is provided all products will be boosted (No-op). Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "conditions": [ # Determines when the associated action will trigger. Omit to always apply the action. Currently only a single condition may be specified. Otherwise an INVALID ARGUMENT error is thrown.
+    { # Defines circumstances to be checked before allowing a behavior
+      "activeTimeRange": [ # Range of time(s) specifying when condition is active. Maximum of 10 time ranges.
+        { # Used for time-dependent conditions.
+          "endTime": "A String", # End of time range. Range is inclusive. Must be in the future.
+          "startTime": "A String", # Start of time range. Range is inclusive.
+        },
+      ],
+      "queryTerms": [ # Search only A list of terms to match the query on. Maximum of 10 query terms.
+        { # Matcher for search request query
+          "fullMatch": True or False, # Whether the search query needs to exactly match the query term.
+          "value": "A String", # The specific query value to match against Must be lowercase, must be UTF-8. Can have at most 3 space separated terms if full_match is true. Cannot be an empty string. Maximum length of 5000 characters.
+        },
+      ],
+    },
+  ],
+  "displayName": "A String", # Required. Human readable name. The identifier used in UI views. Must be UTF-8 encoded string. Length limit is 128 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  "filterAction": { # Specified which products may be included in results. Uses same filter as boost. # Defines a filter-type control Currently not supported by Recommendation
+    "dataStore": "A String", # Required. Specifies which data store's documents can be filtered by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. A filter to apply on the matching condition results. Required Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "name": "A String", # Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`
+  "redirectAction": { # Redirects a shopper to the provided URI. # Defines a redirect-type control.
+    "redirectUri": "A String", # Required. The URI to which the shopper will be redirected. Required. URI must have length equal or less than 2000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "solutionType": "A String", # Required. Immutable. What solution the control belongs to. Must be compatible with vertical of resource. Otherwise an INVALID ARGUMENT error is thrown.
+  "synonymsAction": { # Creates a set of terms that will act as synonyms of one another. Example: "happy" will also be considered as "glad", "glad" will also be considered as "happy". # Treats a group of terms as synonyms of one another.
+    "synonyms": [ # Defines a set of synonyms. Can specify up to 100 synonyms. Must specify at least 2 synonyms. Otherwise an INVALID ARGUMENT error is thrown.
+      "A String",
+    ],
+  },
+  "useCases": [ # Specifies the use case for the control. Affects what condition fields can be set. Only applies to SOLUTION_TYPE_SEARCH. Currently only allow one use case per control. Must be set when solution_type is SolutionType.SOLUTION_TYPE_SEARCH.
+    "A String",
+  ],
+}
+
+ + \ No newline at end of file diff --git a/docs/dyn/discoveryengine_v1alpha.projects.locations.collections.dataStores.conversations.html b/docs/dyn/discoveryengine_v1alpha.projects.locations.collections.dataStores.conversations.html index 021a0864f99..c566f280121 100644 --- a/docs/dyn/discoveryengine_v1alpha.projects.locations.collections.dataStores.conversations.html +++ b/docs/dyn/discoveryengine_v1alpha.projects.locations.collections.dataStores.conversations.html @@ -408,7 +408,7 @@

Method Details

"pageEnd": 42, # The end page of the chunk. "pageStart": 42, # The start page of the chunk. }, - "relevanceScore": 3.14, # Represents the relevance score based on similarity. Higher score represents the chunk relevance. The score is in range [-1.0, 1.0]. Only populated on SearchService.SearchResponse + "relevanceScore": 3.14, # Output only. Represents the relevance score based on similarity. Higher score indicates higher chunk relevance. The score is in range [-1.0, 1.0]. Only populated on SearchService.SearchResponse. }, "document": { # Document captures all raw metadata information of items to be recommended or searched. # The document data snippet in the search response. Only fields that are marked as `retrievable` are populated. "aclInfo": { # ACL Information of the Document. # Access control information for the document. diff --git a/docs/dyn/discoveryengine_v1alpha.projects.locations.collections.dataStores.customModels.html b/docs/dyn/discoveryengine_v1alpha.projects.locations.collections.dataStores.customModels.html index 36ba18b1a73..69560f2c756 100644 --- a/docs/dyn/discoveryengine_v1alpha.projects.locations.collections.dataStores.customModels.html +++ b/docs/dyn/discoveryengine_v1alpha.projects.locations.collections.dataStores.customModels.html @@ -106,7 +106,7 @@

Method Details

"createTime": "A String", # Timestamp the Model was created at. "displayName": "A String", # The display name of the model. "modelState": "A String", # The state that the model is in (e.g.`TRAINING` or `TRAINING_FAILED`). - "modelVersion": "A String", + "modelVersion": "A String", # The version of the model. "name": "A String", # Required. The fully qualified resource name of the model. Format: `projects/{project_number}/locations/{location}/collections/{collection}/dataStores/{data_store}/customTuningModels/{custom_tuning_model}` model must be an alpha-numerical string with limit of 40 characters. "trainingStartTime": "A String", # Timestamp the model training was initiated. }, diff --git a/docs/dyn/discoveryengine_v1alpha.projects.locations.collections.dataStores.html b/docs/dyn/discoveryengine_v1alpha.projects.locations.collections.dataStores.html index a5edbe330f6..c28b15e8bc3 100644 --- a/docs/dyn/discoveryengine_v1alpha.projects.locations.collections.dataStores.html +++ b/docs/dyn/discoveryengine_v1alpha.projects.locations.collections.dataStores.html @@ -79,6 +79,11 @@

Instance Methods

Returns the branches Resource.

+

+ controls() +

+

Returns the controls Resource.

+

conversations()

@@ -277,6 +282,9 @@

Method Details

"keyPropertyType": "A String", # Output only. Type of the key property that this field is mapped to. Empty string if this is not annotated as mapped to a key property. Example types are `title`, `description`. Full list is defined by `keyPropertyMapping` in the schema field annotation. If the schema field has a `KeyPropertyMapping` annotation, `indexable_option` and `searchable_option` of this field cannot be modified. "recsFilterableOption": "A String", # If recs_filterable_option is FILTERABLE_ENABLED, field values are filterable by filter expression in RecommendationService.Recommend. If FILTERABLE_ENABLED but the field type is numerical, field values are not filterable by text queries in RecommendationService.Recommend. Only textual fields are supported. If recs_filterable_option is unset, the default setting is FILTERABLE_DISABLED for fields that support setting filterable options. When a field set to [FILTERABLE_DISABLED] is filtered, a warning is generated and an empty result is returned. "retrievableOption": "A String", # If retrievable_option is RETRIEVABLE_ENABLED, field values are included in the search results. If retrievable_option is unset, the server behavior defaults to RETRIEVABLE_DISABLED for fields that support setting retrievable options. For those fields that do not support setting retrievable options, such as `object` and `boolean`, the server will skip retrievable option setting, and setting retrievable_option for those fields will throw `INVALID_ARGUMENT` error. + "schemaOrgPaths": [ # Field paths for indexing custom attribute from schema.org data. More details of schema.org and its defined types can be found at [schema.org](https://schema.org). It is only used on advanced site search schema. Currently only support full path from root. The full path to a field is constructed by concatenating field names, starting from `_root`, with a period `.` as the delimiter. Examples: * Publish date of the root: _root.datePublished * Publish date of the reviews: _root.review.datePublished + "A String", + ], "searchableOption": "A String", # If searchable_option is SEARCHABLE_ENABLED, field values are searchable by text queries in SearchService.Search. If SEARCHABLE_ENABLED but field type is numerical, field values will not be searchable by text queries in SearchService.Search, as there are no text values associated to numerical fields. If searchable_option is unset, the server behavior defaults to SEARCHABLE_DISABLED for fields that support setting searchable options. Only `string` fields that have no key property mapping support setting searchable_option. For those fields that do not support setting searchable options, the server will skip searchable option setting, and setting searchable_option for those fields will throw `INVALID_ARGUMENT` error. }, ], @@ -434,6 +442,9 @@

Method Details

"keyPropertyType": "A String", # Output only. Type of the key property that this field is mapped to. Empty string if this is not annotated as mapped to a key property. Example types are `title`, `description`. Full list is defined by `keyPropertyMapping` in the schema field annotation. If the schema field has a `KeyPropertyMapping` annotation, `indexable_option` and `searchable_option` of this field cannot be modified. "recsFilterableOption": "A String", # If recs_filterable_option is FILTERABLE_ENABLED, field values are filterable by filter expression in RecommendationService.Recommend. If FILTERABLE_ENABLED but the field type is numerical, field values are not filterable by text queries in RecommendationService.Recommend. Only textual fields are supported. If recs_filterable_option is unset, the default setting is FILTERABLE_DISABLED for fields that support setting filterable options. When a field set to [FILTERABLE_DISABLED] is filtered, a warning is generated and an empty result is returned. "retrievableOption": "A String", # If retrievable_option is RETRIEVABLE_ENABLED, field values are included in the search results. If retrievable_option is unset, the server behavior defaults to RETRIEVABLE_DISABLED for fields that support setting retrievable options. For those fields that do not support setting retrievable options, such as `object` and `boolean`, the server will skip retrievable option setting, and setting retrievable_option for those fields will throw `INVALID_ARGUMENT` error. + "schemaOrgPaths": [ # Field paths for indexing custom attribute from schema.org data. More details of schema.org and its defined types can be found at [schema.org](https://schema.org). It is only used on advanced site search schema. Currently only support full path from root. The full path to a field is constructed by concatenating field names, starting from `_root`, with a period `.` as the delimiter. Examples: * Publish date of the root: _root.datePublished * Publish date of the reviews: _root.review.datePublished + "A String", + ], "searchableOption": "A String", # If searchable_option is SEARCHABLE_ENABLED, field values are searchable by text queries in SearchService.Search. If SEARCHABLE_ENABLED but field type is numerical, field values will not be searchable by text queries in SearchService.Search, as there are no text values associated to numerical fields. If searchable_option is unset, the server behavior defaults to SEARCHABLE_DISABLED for fields that support setting searchable options. Only `string` fields that have no key property mapping support setting searchable_option. For those fields that do not support setting searchable options, the server will skip searchable option setting, and setting searchable_option for those fields will throw `INVALID_ARGUMENT` error. }, ], @@ -522,7 +533,7 @@

Method Details

Args: parent: string, Required. The parent branch resource name, such as `projects/{project}/locations/{location}/collections/{collection_id}`. If the caller does not have permission to list DataStores under this location, regardless of whether or not this data store exists, a PERMISSION_DENIED error is returned. (required) - filter: string, Filter by solution type . For example: filter = 'solution_type:SOLUTION_TYPE_SEARCH' + filter: string, Filter by solution type. For example: `filter = 'solution_type:SOLUTION_TYPE_SEARCH'` pageSize: integer, Maximum number of DataStores to return. If unspecified, defaults to 10. The maximum allowed value is 50. Values above 50 will be coerced to 50. If this field is negative, an INVALID_ARGUMENT is returned. pageToken: string, A page token ListDataStoresResponse.next_page_token, received from a previous DataStoreService.ListDataStores call. Provide this to retrieve the subsequent page. When paginating, all other parameters provided to DataStoreService.ListDataStores must match the call that provided the page token. Otherwise, an INVALID_ARGUMENT error is returned. x__xgafv: string, V1 error format. @@ -601,6 +612,9 @@
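To make the filter usage above concrete, here is a minimal sketch of calling this list method through google-api-python-client; the project, location, and collection names are placeholders, and application default credentials are assumed:

from googleapiclient.discovery import build

# Build a Discovery Engine client (v1alpha, as documented in this file).
service = build("discoveryengine", "v1alpha")

parent = "projects/my-project/locations/global/collections/default_collection"  # placeholder
request = service.projects().locations().collections().dataStores().list(
    parent=parent,
    filter="solution_type:SOLUTION_TYPE_SEARCH",
    pageSize=50,  # values above 50 are coerced to 50
)
while request is not None:
    response = request.execute()
    for store in response.get("dataStores", []):
        print(store["name"])
    # list_next() handles the pageToken plumbing described above.
    request = service.projects().locations().collections().dataStores().list_next(
        previous_request=request, previous_response=response
    )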

Method Details

"keyPropertyType": "A String", # Output only. Type of the key property that this field is mapped to. Empty string if this is not annotated as mapped to a key property. Example types are `title`, `description`. Full list is defined by `keyPropertyMapping` in the schema field annotation. If the schema field has a `KeyPropertyMapping` annotation, `indexable_option` and `searchable_option` of this field cannot be modified. "recsFilterableOption": "A String", # If recs_filterable_option is FILTERABLE_ENABLED, field values are filterable by filter expression in RecommendationService.Recommend. If FILTERABLE_ENABLED but the field type is numerical, field values are not filterable by text queries in RecommendationService.Recommend. Only textual fields are supported. If recs_filterable_option is unset, the default setting is FILTERABLE_DISABLED for fields that support setting filterable options. When a field set to [FILTERABLE_DISABLED] is filtered, a warning is generated and an empty result is returned. "retrievableOption": "A String", # If retrievable_option is RETRIEVABLE_ENABLED, field values are included in the search results. If retrievable_option is unset, the server behavior defaults to RETRIEVABLE_DISABLED for fields that support setting retrievable options. For those fields that do not support setting retrievable options, such as `object` and `boolean`, the server will skip retrievable option setting, and setting retrievable_option for those fields will throw `INVALID_ARGUMENT` error. + "schemaOrgPaths": [ # Field paths for indexing custom attribute from schema.org data. More details of schema.org and its defined types can be found at [schema.org](https://schema.org). It is only used on advanced site search schema. Currently only support full path from root. The full path to a field is constructed by concatenating field names, starting from `_root`, with a period `.` as the delimiter. Examples: * Publish date of the root: _root.datePublished * Publish date of the reviews: _root.review.datePublished + "A String", + ], "searchableOption": "A String", # If searchable_option is SEARCHABLE_ENABLED, field values are searchable by text queries in SearchService.Search. If SEARCHABLE_ENABLED but field type is numerical, field values will not be searchable by text queries in SearchService.Search, as there are no text values associated to numerical fields. If searchable_option is unset, the server behavior defaults to SEARCHABLE_DISABLED for fields that support setting searchable options. Only `string` fields that have no key property mapping support setting searchable_option. For those fields that do not support setting searchable options, the server will skip searchable option setting, and setting searchable_option for those fields will throw `INVALID_ARGUMENT` error. }, ], @@ -705,6 +719,9 @@

Method Details

"keyPropertyType": "A String", # Output only. Type of the key property that this field is mapped to. Empty string if this is not annotated as mapped to a key property. Example types are `title`, `description`. Full list is defined by `keyPropertyMapping` in the schema field annotation. If the schema field has a `KeyPropertyMapping` annotation, `indexable_option` and `searchable_option` of this field cannot be modified. "recsFilterableOption": "A String", # If recs_filterable_option is FILTERABLE_ENABLED, field values are filterable by filter expression in RecommendationService.Recommend. If FILTERABLE_ENABLED but the field type is numerical, field values are not filterable by text queries in RecommendationService.Recommend. Only textual fields are supported. If recs_filterable_option is unset, the default setting is FILTERABLE_DISABLED for fields that support setting filterable options. When a field set to [FILTERABLE_DISABLED] is filtered, a warning is generated and an empty result is returned. "retrievableOption": "A String", # If retrievable_option is RETRIEVABLE_ENABLED, field values are included in the search results. If retrievable_option is unset, the server behavior defaults to RETRIEVABLE_DISABLED for fields that support setting retrievable options. For those fields that do not support setting retrievable options, such as `object` and `boolean`, the server will skip retrievable option setting, and setting retrievable_option for those fields will throw `INVALID_ARGUMENT` error. + "schemaOrgPaths": [ # Field paths for indexing custom attribute from schema.org data. More details of schema.org and its defined types can be found at [schema.org](https://schema.org). It is only used on advanced site search schema. Currently only support full path from root. The full path to a field is constructed by concatenating field names, starting from `_root`, with a period `.` as the delimiter. Examples: * Publish date of the root: _root.datePublished * Publish date of the reviews: _root.review.datePublished + "A String", + ], "searchableOption": "A String", # If searchable_option is SEARCHABLE_ENABLED, field values are searchable by text queries in SearchService.Search. If SEARCHABLE_ENABLED but field type is numerical, field values will not be searchable by text queries in SearchService.Search, as there are no text values associated to numerical fields. If searchable_option is unset, the server behavior defaults to SEARCHABLE_DISABLED for fields that support setting searchable options. Only `string` fields that have no key property mapping support setting searchable_option. For those fields that do not support setting searchable options, the server will skip searchable option setting, and setting searchable_option for those fields will throw `INVALID_ARGUMENT` error. }, ], @@ -791,6 +808,9 @@

Method Details

"keyPropertyType": "A String", # Output only. Type of the key property that this field is mapped to. Empty string if this is not annotated as mapped to a key property. Example types are `title`, `description`. Full list is defined by `keyPropertyMapping` in the schema field annotation. If the schema field has a `KeyPropertyMapping` annotation, `indexable_option` and `searchable_option` of this field cannot be modified. "recsFilterableOption": "A String", # If recs_filterable_option is FILTERABLE_ENABLED, field values are filterable by filter expression in RecommendationService.Recommend. If FILTERABLE_ENABLED but the field type is numerical, field values are not filterable by text queries in RecommendationService.Recommend. Only textual fields are supported. If recs_filterable_option is unset, the default setting is FILTERABLE_DISABLED for fields that support setting filterable options. When a field set to [FILTERABLE_DISABLED] is filtered, a warning is generated and an empty result is returned. "retrievableOption": "A String", # If retrievable_option is RETRIEVABLE_ENABLED, field values are included in the search results. If retrievable_option is unset, the server behavior defaults to RETRIEVABLE_DISABLED for fields that support setting retrievable options. For those fields that do not support setting retrievable options, such as `object` and `boolean`, the server will skip retrievable option setting, and setting retrievable_option for those fields will throw `INVALID_ARGUMENT` error. + "schemaOrgPaths": [ # Field paths for indexing custom attribute from schema.org data. More details of schema.org and its defined types can be found at [schema.org](https://schema.org). It is only used on advanced site search schema. Currently only support full path from root. The full path to a field is constructed by concatenating field names, starting from `_root`, with a period `.` as the delimiter. Examples: * Publish date of the root: _root.datePublished * Publish date of the reviews: _root.review.datePublished + "A String", + ], "searchableOption": "A String", # If searchable_option is SEARCHABLE_ENABLED, field values are searchable by text queries in SearchService.Search. If SEARCHABLE_ENABLED but field type is numerical, field values will not be searchable by text queries in SearchService.Search, as there are no text values associated to numerical fields. If searchable_option is unset, the server behavior defaults to SEARCHABLE_DISABLED for fields that support setting searchable options. Only `string` fields that have no key property mapping support setting searchable_option. For those fields that do not support setting searchable options, the server will skip searchable option setting, and setting searchable_option for those fields will throw `INVALID_ARGUMENT` error. }, ], diff --git a/docs/dyn/discoveryengine_v1alpha.projects.locations.collections.dataStores.schemas.html b/docs/dyn/discoveryengine_v1alpha.projects.locations.collections.dataStores.schemas.html index 3ecc0ec2cdf..f8c51edbdbf 100644 --- a/docs/dyn/discoveryengine_v1alpha.projects.locations.collections.dataStores.schemas.html +++ b/docs/dyn/discoveryengine_v1alpha.projects.locations.collections.dataStores.schemas.html @@ -129,6 +129,9 @@

Method Details

"keyPropertyType": "A String", # Output only. Type of the key property that this field is mapped to. Empty string if this is not annotated as mapped to a key property. Example types are `title`, `description`. Full list is defined by `keyPropertyMapping` in the schema field annotation. If the schema field has a `KeyPropertyMapping` annotation, `indexable_option` and `searchable_option` of this field cannot be modified. "recsFilterableOption": "A String", # If recs_filterable_option is FILTERABLE_ENABLED, field values are filterable by filter expression in RecommendationService.Recommend. If FILTERABLE_ENABLED but the field type is numerical, field values are not filterable by text queries in RecommendationService.Recommend. Only textual fields are supported. If recs_filterable_option is unset, the default setting is FILTERABLE_DISABLED for fields that support setting filterable options. When a field set to [FILTERABLE_DISABLED] is filtered, a warning is generated and an empty result is returned. "retrievableOption": "A String", # If retrievable_option is RETRIEVABLE_ENABLED, field values are included in the search results. If retrievable_option is unset, the server behavior defaults to RETRIEVABLE_DISABLED for fields that support setting retrievable options. For those fields that do not support setting retrievable options, such as `object` and `boolean`, the server will skip retrievable option setting, and setting retrievable_option for those fields will throw `INVALID_ARGUMENT` error. + "schemaOrgPaths": [ # Field paths for indexing custom attribute from schema.org data. More details of schema.org and its defined types can be found at [schema.org](https://schema.org). It is only used on advanced site search schema. Currently only support full path from root. The full path to a field is constructed by concatenating field names, starting from `_root`, with a period `.` as the delimiter. Examples: * Publish date of the root: _root.datePublished * Publish date of the reviews: _root.review.datePublished + "A String", + ], "searchableOption": "A String", # If searchable_option is SEARCHABLE_ENABLED, field values are searchable by text queries in SearchService.Search. If SEARCHABLE_ENABLED but field type is numerical, field values will not be searchable by text queries in SearchService.Search, as there are no text values associated to numerical fields. If searchable_option is unset, the server behavior defaults to SEARCHABLE_DISABLED for fields that support setting searchable options. Only `string` fields that have no key property mapping support setting searchable_option. For those fields that do not support setting searchable options, the server will skip searchable option setting, and setting searchable_option for those fields will throw `INVALID_ARGUMENT` error. }, ], @@ -232,6 +235,9 @@

Method Details

"keyPropertyType": "A String", # Output only. Type of the key property that this field is mapped to. Empty string if this is not annotated as mapped to a key property. Example types are `title`, `description`. Full list is defined by `keyPropertyMapping` in the schema field annotation. If the schema field has a `KeyPropertyMapping` annotation, `indexable_option` and `searchable_option` of this field cannot be modified. "recsFilterableOption": "A String", # If recs_filterable_option is FILTERABLE_ENABLED, field values are filterable by filter expression in RecommendationService.Recommend. If FILTERABLE_ENABLED but the field type is numerical, field values are not filterable by text queries in RecommendationService.Recommend. Only textual fields are supported. If recs_filterable_option is unset, the default setting is FILTERABLE_DISABLED for fields that support setting filterable options. When a field set to [FILTERABLE_DISABLED] is filtered, a warning is generated and an empty result is returned. "retrievableOption": "A String", # If retrievable_option is RETRIEVABLE_ENABLED, field values are included in the search results. If retrievable_option is unset, the server behavior defaults to RETRIEVABLE_DISABLED for fields that support setting retrievable options. For those fields that do not support setting retrievable options, such as `object` and `boolean`, the server will skip retrievable option setting, and setting retrievable_option for those fields will throw `INVALID_ARGUMENT` error. + "schemaOrgPaths": [ # Field paths for indexing custom attribute from schema.org data. More details of schema.org and its defined types can be found at [schema.org](https://schema.org). It is only used on advanced site search schema. Currently only support full path from root. The full path to a field is constructed by concatenating field names, starting from `_root`, with a period `.` as the delimiter. Examples: * Publish date of the root: _root.datePublished * Publish date of the reviews: _root.review.datePublished + "A String", + ], "searchableOption": "A String", # If searchable_option is SEARCHABLE_ENABLED, field values are searchable by text queries in SearchService.Search. If SEARCHABLE_ENABLED but field type is numerical, field values will not be searchable by text queries in SearchService.Search, as there are no text values associated to numerical fields. If searchable_option is unset, the server behavior defaults to SEARCHABLE_DISABLED for fields that support setting searchable options. Only `string` fields that have no key property mapping support setting searchable_option. For those fields that do not support setting searchable options, the server will skip searchable option setting, and setting searchable_option for those fields will throw `INVALID_ARGUMENT` error. }, ], @@ -276,6 +282,9 @@

Method Details

"keyPropertyType": "A String", # Output only. Type of the key property that this field is mapped to. Empty string if this is not annotated as mapped to a key property. Example types are `title`, `description`. Full list is defined by `keyPropertyMapping` in the schema field annotation. If the schema field has a `KeyPropertyMapping` annotation, `indexable_option` and `searchable_option` of this field cannot be modified. "recsFilterableOption": "A String", # If recs_filterable_option is FILTERABLE_ENABLED, field values are filterable by filter expression in RecommendationService.Recommend. If FILTERABLE_ENABLED but the field type is numerical, field values are not filterable by text queries in RecommendationService.Recommend. Only textual fields are supported. If recs_filterable_option is unset, the default setting is FILTERABLE_DISABLED for fields that support setting filterable options. When a field set to [FILTERABLE_DISABLED] is filtered, a warning is generated and an empty result is returned. "retrievableOption": "A String", # If retrievable_option is RETRIEVABLE_ENABLED, field values are included in the search results. If retrievable_option is unset, the server behavior defaults to RETRIEVABLE_DISABLED for fields that support setting retrievable options. For those fields that do not support setting retrievable options, such as `object` and `boolean`, the server will skip retrievable option setting, and setting retrievable_option for those fields will throw `INVALID_ARGUMENT` error. + "schemaOrgPaths": [ # Field paths for indexing custom attribute from schema.org data. More details of schema.org and its defined types can be found at [schema.org](https://schema.org). It is only used on advanced site search schema. Currently only support full path from root. The full path to a field is constructed by concatenating field names, starting from `_root`, with a period `.` as the delimiter. Examples: * Publish date of the root: _root.datePublished * Publish date of the reviews: _root.review.datePublished + "A String", + ], "searchableOption": "A String", # If searchable_option is SEARCHABLE_ENABLED, field values are searchable by text queries in SearchService.Search. If SEARCHABLE_ENABLED but field type is numerical, field values will not be searchable by text queries in SearchService.Search, as there are no text values associated to numerical fields. If searchable_option is unset, the server behavior defaults to SEARCHABLE_DISABLED for fields that support setting searchable options. Only `string` fields that have no key property mapping support setting searchable_option. For those fields that do not support setting searchable options, the server will skip searchable option setting, and setting searchable_option for those fields will throw `INVALID_ARGUMENT` error. }, ], @@ -326,6 +335,9 @@

Method Details

"keyPropertyType": "A String", # Output only. Type of the key property that this field is mapped to. Empty string if this is not annotated as mapped to a key property. Example types are `title`, `description`. Full list is defined by `keyPropertyMapping` in the schema field annotation. If the schema field has a `KeyPropertyMapping` annotation, `indexable_option` and `searchable_option` of this field cannot be modified. "recsFilterableOption": "A String", # If recs_filterable_option is FILTERABLE_ENABLED, field values are filterable by filter expression in RecommendationService.Recommend. If FILTERABLE_ENABLED but the field type is numerical, field values are not filterable by text queries in RecommendationService.Recommend. Only textual fields are supported. If recs_filterable_option is unset, the default setting is FILTERABLE_DISABLED for fields that support setting filterable options. When a field set to [FILTERABLE_DISABLED] is filtered, a warning is generated and an empty result is returned. "retrievableOption": "A String", # If retrievable_option is RETRIEVABLE_ENABLED, field values are included in the search results. If retrievable_option is unset, the server behavior defaults to RETRIEVABLE_DISABLED for fields that support setting retrievable options. For those fields that do not support setting retrievable options, such as `object` and `boolean`, the server will skip retrievable option setting, and setting retrievable_option for those fields will throw `INVALID_ARGUMENT` error. + "schemaOrgPaths": [ # Field paths for indexing custom attribute from schema.org data. More details of schema.org and its defined types can be found at [schema.org](https://schema.org). It is only used on advanced site search schema. Currently only support full path from root. The full path to a field is constructed by concatenating field names, starting from `_root`, with a period `.` as the delimiter. Examples: * Publish date of the root: _root.datePublished * Publish date of the reviews: _root.review.datePublished + "A String", + ], "searchableOption": "A String", # If searchable_option is SEARCHABLE_ENABLED, field values are searchable by text queries in SearchService.Search. If SEARCHABLE_ENABLED but field type is numerical, field values will not be searchable by text queries in SearchService.Search, as there are no text values associated to numerical fields. If searchable_option is unset, the server behavior defaults to SEARCHABLE_DISABLED for fields that support setting searchable options. Only `string` fields that have no key property mapping support setting searchable_option. For those fields that do not support setting searchable options, the server will skip searchable option setting, and setting searchable_option for those fields will throw `INVALID_ARGUMENT` error. }, ], diff --git a/docs/dyn/discoveryengine_v1alpha.projects.locations.collections.dataStores.servingConfigs.html b/docs/dyn/discoveryengine_v1alpha.projects.locations.collections.dataStores.servingConfigs.html index aa12a3db303..7ab0cd7e71f 100644 --- a/docs/dyn/discoveryengine_v1alpha.projects.locations.collections.dataStores.servingConfigs.html +++ b/docs/dyn/discoveryengine_v1alpha.projects.locations.collections.dataStores.servingConfigs.html @@ -115,6 +115,7 @@

Method Details

"answerGenerationSpec": { # Answer generation specification. # Answer generation specification. "answerLanguageCode": "A String", # Language code for Answer. Use language tags defined by [BCP47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt). Note: This is an experimental feature. "ignoreAdversarialQuery": True or False, # Specifies whether to filter out adversarial queries. The default value is `false`. Google employs search-query classification to detect adversarial queries. No answer is returned if the search query is classified as an adversarial query. For example, a user might ask a question regarding negative comments about the company or submit a query designed to generate unsafe, policy-violating output. If this field is set to `true`, we skip generating answers for adversarial queries and return fallback messages instead. + "ignoreLowRelevantContent": True or False, # Specifies whether to filter out queries that have low relevance. If this field is set to `false`, all search results are used regardless of relevance to generate answers. If set to `true` or unset, the behavior will be determined automatically by the service. "ignoreNonAnswerSeekingQuery": True or False, # Specifies whether to filter out queries that are not answer-seeking. The default value is `false`. Google employs search-query classification to detect answer-seeking queries. No answer is returned if the search query is classified as a non-answer seeking query. If this field is set to `true`, we skip generating answers for non-answer seeking queries and return fallback messages instead. "includeCitations": True or False, # Specifies whether to include citation metadata in the answer. The default value is `false`. "modelSpec": { # Answer Generation Model specification. # Answer generation model specification. @@ -170,6 +171,11 @@

Method Details

"customFineTuningSpec": { # Defines custom fine tuning spec. # Custom fine tuning configs. "enableSearchAdaptor": True or False, # Whether or not to enable and include custom fine tuned search adaptor model. }, + "dataStoreSpecs": [ # Specs defining dataStores to filter on in a search call and configurations for those dataStores. This is only considered for engines with multiple dataStores use case. For single dataStore within an engine, they should use the specs at the top level. + { # A struct to define data stores to filter on in a search call and configurations for those data stores. A maximum of 1 DataStoreSpec per data_store is allowed. Otherwise, an `INVALID_ARGUMENT` error is returned. + "dataStore": "A String", # Required. Full resource name of DataStore, such as `projects/{project}/locations/{location}/collections/{collection_id}/dataStores/{data_store_id}`. + }, + ], "filter": "A String", # The filter syntax consists of an expression language for constructing a predicate from one or more fields of the documents being filtered. Filter expression is case-sensitive. This will be used to filter search results which may affect the Answer response. If this field is unrecognizable, an `INVALID_ARGUMENT` is returned. Filtering in Vertex AI Search is done by mapping the LHS filter key to a key property defined in the Vertex AI Search backend -- this mapping is defined by the customer in their schema. For example a media customers might have a field 'name' in their schema. In this case the filter would look like this: filter --> name:'ANY("king kong")' For more information about filtering including syntax and filter operators, see [Filter](https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata) "maxReturnResults": 42, # Number of search results to return. The default value is 10. "orderBy": "A String", # The order in which documents are returned. Documents can be ordered by a field in an Document object. Leave it unset if ordered by relevance. `order_by` expression is case-sensitive. For more information on ordering, see [Ordering](https://cloud.google.com/retail/docs/filter-and-order#order) If this field is unrecognizable, an `INVALID_ARGUMENT` is returned. @@ -257,6 +263,9 @@

Method Details

"documentMetadata": { # Document metadata. # Document metadata. "document": "A String", # Document resource name. "pageIdentifier": "A String", # Page identifier. + "structData": { # The structured JSON metadata for the document. It is populated from the struct data from the Chunk in search result. + "a_key": "", # Properties of the object. + }, "title": "A String", # Title. "uri": "A String", # URI for the document. }, @@ -270,6 +279,9 @@

Method Details

}, ], "document": "A String", # Document resource name. + "structData": { # The structured JSON metadata for the document. It is populated from the struct data from the Chunk in search result. + "a_key": "", # Properties of the object. + }, "title": "A String", # Title. "uri": "A String", # URI for the document. }, @@ -387,7 +399,7 @@

Method Details

"numPreviousSegments": 42, # Specifies whether to also include the adjacent from each selected segments. Return at most `num_previous_segments` segments before each selected segments. "returnExtractiveSegmentScore": True or False, # Specifies whether to return the confidence score from the extractive segments in each search result. This feature is available only for new or allowlisted data stores. To allowlist your data store, contact your Customer Engineer. The default value is `false`. }, - "searchResultMode": "A String", # Specifies the search result mode. If unspecified, the search result mode is based on DataStore.DocumentProcessingConfig.chunking_config: * If DataStore.DocumentProcessingConfig.chunking_config is specified, it defaults to `CHUNKS`. * Otherwise, it defaults to `DOCUMENTS`. + "searchResultMode": "A String", # Specifies the search result mode. If unspecified, the search result mode defaults to `DOCUMENTS`. "snippetSpec": { # A specification for configuring snippets in a search response. # If `snippetSpec` is not specified, snippets are not included in the search response. "maxSnippetCount": 42, # [DEPRECATED] This field is deprecated. To control snippet return, use `return_snippet` field. For backwards compatibility, we will return snippet if max_snippet_count > 0. "referenceOnly": True or False, # [DEPRECATED] This field is deprecated and will have no affect on the snippet. @@ -494,7 +506,7 @@

Method Details

"numPreviousSegments": 42, # Specifies whether to also include the adjacent from each selected segments. Return at most `num_previous_segments` segments before each selected segments. "returnExtractiveSegmentScore": True or False, # Specifies whether to return the confidence score from the extractive segments in each search result. This feature is available only for new or allowlisted data stores. To allowlist your data store, contact your Customer Engineer. The default value is `false`. }, - "searchResultMode": "A String", # Specifies the search result mode. If unspecified, the search result mode is based on DataStore.DocumentProcessingConfig.chunking_config: * If DataStore.DocumentProcessingConfig.chunking_config is specified, it defaults to `CHUNKS`. * Otherwise, it defaults to `DOCUMENTS`. + "searchResultMode": "A String", # Specifies the search result mode. If unspecified, the search result mode defaults to `DOCUMENTS`. "snippetSpec": { # A specification for configuring snippets in a search response. # If `snippetSpec` is not specified, snippets are not included in the search response. "maxSnippetCount": 42, # [DEPRECATED] This field is deprecated. To control snippet return, use `return_snippet` field. For backwards compatibility, we will return snippet if max_snippet_count > 0. "referenceOnly": True or False, # [DEPRECATED] This field is deprecated and will have no affect on the snippet. @@ -607,7 +619,7 @@

Method Details

"numPreviousSegments": 42, # Specifies whether to also include the adjacent from each selected segments. Return at most `num_previous_segments` segments before each selected segments. "returnExtractiveSegmentScore": True or False, # Specifies whether to return the confidence score from the extractive segments in each search result. This feature is available only for new or allowlisted data stores. To allowlist your data store, contact your Customer Engineer. The default value is `false`. }, - "searchResultMode": "A String", # Specifies the search result mode. If unspecified, the search result mode is based on DataStore.DocumentProcessingConfig.chunking_config: * If DataStore.DocumentProcessingConfig.chunking_config is specified, it defaults to `CHUNKS`. * Otherwise, it defaults to `DOCUMENTS`. + "searchResultMode": "A String", # Specifies the search result mode. If unspecified, the search result mode defaults to `DOCUMENTS`. "snippetSpec": { # A specification for configuring snippets in a search response. # If `snippetSpec` is not specified, snippets are not included in the search response. "maxSnippetCount": 42, # [DEPRECATED] This field is deprecated. To control snippet return, use `return_snippet` field. For backwards compatibility, we will return snippet if max_snippet_count > 0. "referenceOnly": True or False, # [DEPRECATED] This field is deprecated and will have no affect on the snippet. @@ -703,7 +715,7 @@

Method Details

"numPreviousSegments": 42, # Specifies whether to also include the adjacent from each selected segments. Return at most `num_previous_segments` segments before each selected segments. "returnExtractiveSegmentScore": True or False, # Specifies whether to return the confidence score from the extractive segments in each search result. This feature is available only for new or allowlisted data stores. To allowlist your data store, contact your Customer Engineer. The default value is `false`. }, - "searchResultMode": "A String", # Specifies the search result mode. If unspecified, the search result mode is based on DataStore.DocumentProcessingConfig.chunking_config: * If DataStore.DocumentProcessingConfig.chunking_config is specified, it defaults to `CHUNKS`. * Otherwise, it defaults to `DOCUMENTS`. + "searchResultMode": "A String", # Specifies the search result mode. If unspecified, the search result mode defaults to `DOCUMENTS`. "snippetSpec": { # A specification for configuring snippets in a search response. # If `snippetSpec` is not specified, snippets are not included in the search response. "maxSnippetCount": 42, # [DEPRECATED] This field is deprecated. To control snippet return, use `return_snippet` field. For backwards compatibility, we will return snippet if max_snippet_count > 0. "referenceOnly": True or False, # [DEPRECATED] This field is deprecated and will have no affect on the snippet. @@ -790,6 +802,7 @@

Method Details

"selectedPosition": 42, # End user selected CompleteQueryResponse.QuerySuggestion.suggestion position, starting from 0. "selectedSuggestion": "A String", # End user selected CompleteQueryResponse.QuerySuggestion.suggestion. }, + "dataStore": "A String", # The DataStore resource full name, of the form `projects/{project}/locations/{location}/collections/{collection_id}/dataStores/{data_store_id}`. Optional. Only required for user events whose data store can't by determined by UserEvent.engine or UserEvent.documents. If data store is set in the parent of write/import/collect user event requests, this field can be omitted. "directUserRequest": True or False, # Should set to true if the request is made directly from the end user, in which case the UserEvent.user_info.user_agent can be populated from the HTTP request. This flag should be set only if the API request is made directly from the end user such as a mobile app (and not if a gateway or a server is processing and pushing the user events). This should not be set when using the JavaScript tag in UserEventService.CollectUserEvent. "documents": [ # List of Documents associated with this user event. This field is optional except for the following event types: * `view-item` * `add-to-cart` * `purchase` * `media-play` * `media-complete` In a `search` event, this field represents the documents returned to the end user on the current page (the end user may have not finished browsing the whole page yet). When a new page is returned to the end user, after pagination/filtering/ordering even for the same query, a new `search` event with different UserEvent.documents is desired. { # Detailed document information associated with a user event. @@ -802,6 +815,7 @@

Method Details

"uri": "A String", # The Document URI - only allowed for website data stores. }, ], + "engine": "A String", # The Engine resource name, in the form of `projects/{project}/locations/{location}/collections/{collection_id}/engines/{engine_id}`. Optional. Only required for Engine produced user events. For example, user events from blended search. "eventTime": "A String", # Only required for UserEventService.ImportUserEvents method. Timestamp of when the user event happened. "eventType": "A String", # Required. User event type. Allowed values are: Generic values: * `search`: Search for Documents. * `view-item`: Detailed page view of a Document. * `view-item-list`: View of a panel or ordered list of Documents. * `view-home-page`: View of the home page. * `view-category-page`: View of a category page, e.g. Home > Men > Jeans Retail-related values: * `add-to-cart`: Add an item(s) to cart, e.g. in Retail online shopping * `purchase`: Purchase an item(s) Media-related values: * `media-play`: Start/resume watching a video, playing a song, etc. * `media-complete`: Finished or stopped midway through a video, song, etc. "filter": "A String", # The filter syntax consists of an expression language for constructing a predicate from one or more fields of the documents being filtered. One example is for `search` events, the associated SearchRequest may contain a filter expression in SearchRequest.filter conforming to https://google.aip.dev/160#filtering. Similarly, for `view-item-list` events that are generated from a RecommendRequest, this field may be populated directly from RecommendRequest.filter conforming to https://google.aip.dev/160#filtering. The value must be a UTF-8 encoded string with a length limit of 1,000 characters. Otherwise, an `INVALID_ARGUMENT` error is returned. @@ -952,7 +966,7 @@

Method Details

"numPreviousSegments": 42, # Specifies whether to also include the adjacent from each selected segments. Return at most `num_previous_segments` segments before each selected segments. "returnExtractiveSegmentScore": True or False, # Specifies whether to return the confidence score from the extractive segments in each search result. This feature is available only for new or allowlisted data stores. To allowlist your data store, contact your Customer Engineer. The default value is `false`. }, - "searchResultMode": "A String", # Specifies the search result mode. If unspecified, the search result mode is based on DataStore.DocumentProcessingConfig.chunking_config: * If DataStore.DocumentProcessingConfig.chunking_config is specified, it defaults to `CHUNKS`. * Otherwise, it defaults to `DOCUMENTS`. + "searchResultMode": "A String", # Specifies the search result mode. If unspecified, the search result mode defaults to `DOCUMENTS`. "snippetSpec": { # A specification for configuring snippets in a search response. # If `snippetSpec` is not specified, snippets are not included in the search response. "maxSnippetCount": 42, # [DEPRECATED] This field is deprecated. To control snippet return, use `return_snippet` field. For backwards compatibility, we will return snippet if max_snippet_count > 0. "referenceOnly": True or False, # [DEPRECATED] This field is deprecated and will have no affect on the snippet. @@ -1136,7 +1150,7 @@

Method Details

"pageEnd": 42, # The end page of the chunk. "pageStart": 42, # The start page of the chunk. }, - "relevanceScore": 3.14, # Represents the relevance score based on similarity. Higher score represents the chunk relevance. The score is in range [-1.0, 1.0]. Only populated on SearchService.SearchResponse + "relevanceScore": 3.14, # Output only. Represents the relevance score based on similarity. Higher score indicates higher chunk relevance. The score is in range [-1.0, 1.0]. Only populated on SearchService.SearchResponse. }, "document": { # Document captures all raw metadata information of items to be recommended or searched. # The document data snippet in the search response. Only fields that are marked as `retrievable` are populated. "aclInfo": { # ACL Information of the Document. # Access control information for the document. diff --git a/docs/dyn/discoveryengine_v1alpha.projects.locations.collections.dataStores.sessions.answers.html b/docs/dyn/discoveryengine_v1alpha.projects.locations.collections.dataStores.sessions.answers.html index f869b9a56d5..e41431c2462 100644 --- a/docs/dyn/discoveryengine_v1alpha.projects.locations.collections.dataStores.sessions.answers.html +++ b/docs/dyn/discoveryengine_v1alpha.projects.locations.collections.dataStores.sessions.answers.html @@ -135,6 +135,9 @@

Method Details

"documentMetadata": { # Document metadata. # Document metadata. "document": "A String", # Document resource name. "pageIdentifier": "A String", # Page identifier. + "structData": { # The structured JSON metadata for the document. It is populated from the struct data from the Chunk in search result. + "a_key": "", # Properties of the object. + }, "title": "A String", # Title. "uri": "A String", # URI for the document. }, @@ -148,6 +151,9 @@

Method Details

}, ], "document": "A String", # Document resource name. + "structData": { # The structured JSON metadata for the document. It is populated from the struct data from the Chunk in search result. + "a_key": "", # Properties of the object. + }, "title": "A String", # Title. "uri": "A String", # URI for the document. }, diff --git a/docs/dyn/discoveryengine_v1alpha.projects.locations.collections.dataStores.userEvents.html b/docs/dyn/discoveryengine_v1alpha.projects.locations.collections.dataStores.userEvents.html index 57282b43d03..cefa65fd701 100644 --- a/docs/dyn/discoveryengine_v1alpha.projects.locations.collections.dataStores.userEvents.html +++ b/docs/dyn/discoveryengine_v1alpha.projects.locations.collections.dataStores.userEvents.html @@ -87,7 +87,7 @@

Instance Methods

purge(parent, body=None, x__xgafv=None)

Deletes permanently all user events specified by the filter provided. Depending on the number of events specified by the filter, this operation could take hours or days to complete. To test a filter, use the list command first.

- write(parent, body=None, x__xgafv=None)

+ write(parent, body=None, writeAsync=None, x__xgafv=None)

Writes a single user event.

Method Details

@@ -172,6 +172,7 @@

Method Details

"selectedPosition": 42, # End user selected CompleteQueryResponse.QuerySuggestion.suggestion position, starting from 0. "selectedSuggestion": "A String", # End user selected CompleteQueryResponse.QuerySuggestion.suggestion. }, + "dataStore": "A String", # The DataStore resource full name, of the form `projects/{project}/locations/{location}/collections/{collection_id}/dataStores/{data_store_id}`. Optional. Only required for user events whose data store can't by determined by UserEvent.engine or UserEvent.documents. If data store is set in the parent of write/import/collect user event requests, this field can be omitted. "directUserRequest": True or False, # Should set to true if the request is made directly from the end user, in which case the UserEvent.user_info.user_agent can be populated from the HTTP request. This flag should be set only if the API request is made directly from the end user such as a mobile app (and not if a gateway or a server is processing and pushing the user events). This should not be set when using the JavaScript tag in UserEventService.CollectUserEvent. "documents": [ # List of Documents associated with this user event. This field is optional except for the following event types: * `view-item` * `add-to-cart` * `purchase` * `media-play` * `media-complete` In a `search` event, this field represents the documents returned to the end user on the current page (the end user may have not finished browsing the whole page yet). When a new page is returned to the end user, after pagination/filtering/ordering even for the same query, a new `search` event with different UserEvent.documents is desired. { # Detailed document information associated with a user event. @@ -184,6 +185,7 @@

Method Details

"uri": "A String", # The Document URI - only allowed for website data stores. }, ], + "engine": "A String", # The Engine resource name, in the form of `projects/{project}/locations/{location}/collections/{collection_id}/engines/{engine_id}`. Optional. Only required for Engine produced user events. For example, user events from blended search. "eventTime": "A String", # Only required for UserEventService.ImportUserEvents method. Timestamp of when the user event happened. "eventType": "A String", # Required. User event type. Allowed values are: Generic values: * `search`: Search for Documents. * `view-item`: Detailed page view of a Document. * `view-item-list`: View of a panel or ordered list of Documents. * `view-home-page`: View of the home page. * `view-category-page`: View of a category page, e.g. Home > Men > Jeans Retail-related values: * `add-to-cart`: Add an item(s) to cart, e.g. in Retail online shopping * `purchase`: Purchase an item(s) Media-related values: * `media-play`: Start/resume watching a video, playing a song, etc. * `media-complete`: Finished or stopped midway through a video, song, etc. "filter": "A String", # The filter syntax consists of an expression language for constructing a predicate from one or more fields of the documents being filtered. One example is for `search` events, the associated SearchRequest may contain a filter expression in SearchRequest.filter conforming to https://google.aip.dev/160#filtering. Similarly, for `view-item-list` events that are generated from a RecommendRequest, this field may be populated directly from RecommendRequest.filter conforming to https://google.aip.dev/160#filtering. The value must be a UTF-8 encoded string with a length limit of 1,000 characters. Otherwise, an `INVALID_ARGUMENT` error is returned. @@ -306,7 +308,7 @@

Method Details

- write(parent, body=None, x__xgafv=None) + write(parent, body=None, writeAsync=None, x__xgafv=None)
Writes a single user event.
 
 Args:
@@ -330,6 +332,7 @@ 

Method Details

"selectedPosition": 42, # End user selected CompleteQueryResponse.QuerySuggestion.suggestion position, starting from 0. "selectedSuggestion": "A String", # End user selected CompleteQueryResponse.QuerySuggestion.suggestion. }, + "dataStore": "A String", # The DataStore resource full name, of the form `projects/{project}/locations/{location}/collections/{collection_id}/dataStores/{data_store_id}`. Optional. Only required for user events whose data store can't by determined by UserEvent.engine or UserEvent.documents. If data store is set in the parent of write/import/collect user event requests, this field can be omitted. "directUserRequest": True or False, # Should set to true if the request is made directly from the end user, in which case the UserEvent.user_info.user_agent can be populated from the HTTP request. This flag should be set only if the API request is made directly from the end user such as a mobile app (and not if a gateway or a server is processing and pushing the user events). This should not be set when using the JavaScript tag in UserEventService.CollectUserEvent. "documents": [ # List of Documents associated with this user event. This field is optional except for the following event types: * `view-item` * `add-to-cart` * `purchase` * `media-play` * `media-complete` In a `search` event, this field represents the documents returned to the end user on the current page (the end user may have not finished browsing the whole page yet). When a new page is returned to the end user, after pagination/filtering/ordering even for the same query, a new `search` event with different UserEvent.documents is desired. { # Detailed document information associated with a user event. @@ -342,6 +345,7 @@

Method Details

"uri": "A String", # The Document URI - only allowed for website data stores. }, ], + "engine": "A String", # The Engine resource name, in the form of `projects/{project}/locations/{location}/collections/{collection_id}/engines/{engine_id}`. Optional. Only required for Engine produced user events. For example, user events from blended search. "eventTime": "A String", # Only required for UserEventService.ImportUserEvents method. Timestamp of when the user event happened. "eventType": "A String", # Required. User event type. Allowed values are: Generic values: * `search`: Search for Documents. * `view-item`: Detailed page view of a Document. * `view-item-list`: View of a panel or ordered list of Documents. * `view-home-page`: View of the home page. * `view-category-page`: View of a category page, e.g. Home > Men > Jeans Retail-related values: * `add-to-cart`: Add an item(s) to cart, e.g. in Retail online shopping * `purchase`: Purchase an item(s) Media-related values: * `media-play`: Start/resume watching a video, playing a song, etc. * `media-complete`: Finished or stopped midway through a video, song, etc. "filter": "A String", # The filter syntax consists of an expression language for constructing a predicate from one or more fields of the documents being filtered. One example is for `search` events, the associated SearchRequest may contain a filter expression in SearchRequest.filter conforming to https://google.aip.dev/160#filtering. Similarly, for `view-item-list` events that are generated from a RecommendRequest, this field may be populated directly from RecommendRequest.filter conforming to https://google.aip.dev/160#filtering. The value must be a UTF-8 encoded string with a length limit of 1,000 characters. Otherwise, an `INVALID_ARGUMENT` error is returned. @@ -388,6 +392,7 @@

Method Details

"userPseudoId": "A String", # Required. A unique identifier for tracking visitors. For example, this could be implemented with an HTTP cookie, which should be able to uniquely identify a visitor on a single device. This unique identifier should not change if the visitor log in/out of the website. Do not set the field to the same fixed ID for different users. This mixes the event history of those users together, which results in degraded model quality. The field must be a UTF-8 encoded string with a length limit of 128 characters. Otherwise, an `INVALID_ARGUMENT` error is returned. The field should not contain PII or user-data. We recommend to use Google Analytics [Client ID](https://developers.google.com/analytics/devguides/collection/analyticsjs/field-reference#clientId) for this field. } + writeAsync: boolean, If set to true, the user event is written asynchronously after validation, and the API responds without waiting for the write. x__xgafv: string, V1 error format. Allowed values 1 - v1 error format @@ -412,6 +417,7 @@

Method Details

"selectedPosition": 42, # End user selected CompleteQueryResponse.QuerySuggestion.suggestion position, starting from 0. "selectedSuggestion": "A String", # End user selected CompleteQueryResponse.QuerySuggestion.suggestion. }, + "dataStore": "A String", # The DataStore resource full name, of the form `projects/{project}/locations/{location}/collections/{collection_id}/dataStores/{data_store_id}`. Optional. Only required for user events whose data store can't by determined by UserEvent.engine or UserEvent.documents. If data store is set in the parent of write/import/collect user event requests, this field can be omitted. "directUserRequest": True or False, # Should set to true if the request is made directly from the end user, in which case the UserEvent.user_info.user_agent can be populated from the HTTP request. This flag should be set only if the API request is made directly from the end user such as a mobile app (and not if a gateway or a server is processing and pushing the user events). This should not be set when using the JavaScript tag in UserEventService.CollectUserEvent. "documents": [ # List of Documents associated with this user event. This field is optional except for the following event types: * `view-item` * `add-to-cart` * `purchase` * `media-play` * `media-complete` In a `search` event, this field represents the documents returned to the end user on the current page (the end user may have not finished browsing the whole page yet). When a new page is returned to the end user, after pagination/filtering/ordering even for the same query, a new `search` event with different UserEvent.documents is desired. { # Detailed document information associated with a user event. @@ -424,6 +430,7 @@

Method Details

"uri": "A String", # The Document URI - only allowed for website data stores. }, ], + "engine": "A String", # The Engine resource name, in the form of `projects/{project}/locations/{location}/collections/{collection_id}/engines/{engine_id}`. Optional. Only required for Engine produced user events. For example, user events from blended search. "eventTime": "A String", # Only required for UserEventService.ImportUserEvents method. Timestamp of when the user event happened. "eventType": "A String", # Required. User event type. Allowed values are: Generic values: * `search`: Search for Documents. * `view-item`: Detailed page view of a Document. * `view-item-list`: View of a panel or ordered list of Documents. * `view-home-page`: View of the home page. * `view-category-page`: View of a category page, e.g. Home > Men > Jeans Retail-related values: * `add-to-cart`: Add an item(s) to cart, e.g. in Retail online shopping * `purchase`: Purchase an item(s) Media-related values: * `media-play`: Start/resume watching a video, playing a song, etc. * `media-complete`: Finished or stopped midway through a video, song, etc. "filter": "A String", # The filter syntax consists of an expression language for constructing a predicate from one or more fields of the documents being filtered. One example is for `search` events, the associated SearchRequest may contain a filter expression in SearchRequest.filter conforming to https://google.aip.dev/160#filtering. Similarly, for `view-item-list` events that are generated from a RecommendRequest, this field may be populated directly from RecommendRequest.filter conforming to https://google.aip.dev/160#filtering. The value must be a UTF-8 encoded string with a length limit of 1,000 characters. Otherwise, an `INVALID_ARGUMENT` error is returned. diff --git a/docs/dyn/discoveryengine_v1alpha.projects.locations.collections.engines.controls.html b/docs/dyn/discoveryengine_v1alpha.projects.locations.collections.engines.controls.html new file mode 100644 index 00000000000..29fb81a6436 --- /dev/null +++ b/docs/dyn/discoveryengine_v1alpha.projects.locations.collections.engines.controls.html @@ -0,0 +1,482 @@ + + + +

Discovery Engine API . projects . locations . collections . engines . controls

+

Instance Methods

+

+ close()

+

Close httplib2 connections.

+

+ create(parent, body=None, controlId=None, x__xgafv=None)

+

Creates a Control. By default 1000 controls are allowed for a data store. A request can be submitted to adjust this limit. If the Control to create already exists, an ALREADY_EXISTS error is returned.

+

+ delete(name, x__xgafv=None)

+

Deletes a Control. If the Control to delete does not exist, a NOT_FOUND error is returned.

+

+ get(name, x__xgafv=None)

+

Gets a Control.

+

+ list(parent, filter=None, pageSize=None, pageToken=None, x__xgafv=None)

+

Lists all Controls by their parent DataStore.

+

+ list_next()

+

Retrieves the next page of results.

+

+ patch(name, body=None, updateMask=None, x__xgafv=None)

+

Updates a Control. Control action type cannot be changed. If the Control to update does not exist, a NOT_FOUND error is returned.

+

Method Details

+
+ close() +
Close httplib2 connections.
+
+ +
+ create(parent, body=None, controlId=None, x__xgafv=None) +
Creates a Control. By default 1000 controls are allowed for a data store. A request can be submitted to adjust this limit. If the Control to create already exists, an ALREADY_EXISTS error is returned.
+
+Args:
+  parent: string, Required. Full resource name of parent data store. Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}` (required)
+  body: object, The request body.
+    The object takes the form of:
+
+{ # Defines a conditioned behavior to employ during serving. Must be attached to a ServingConfig to be considered at serving time. Permitted actions dependent on `SolutionType`.
+  "associatedServingConfigIds": [ # Output only. List of all ServingConfig ids this control is attached to. May take up to 10 minutes to update after changes.
+    "A String",
+  ],
+  "boostAction": { # Adjusts order of products in returned list. # Defines a boost-type control
+    "boost": 3.14, # Required. Strength of the boost, which should be in [-1, 1]. Negative boost means demotion. Default is 0.0 (No-op).
+    "dataStore": "A String", # Required. Specifies which data store's documents can be boosted by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. Specifies which products to apply the boost to. If no filter is provided all products will be boosted (No-op). Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "conditions": [ # Determines when the associated action will trigger. Omit to always apply the action. Currently only a single condition may be specified. Otherwise an INVALID ARGUMENT error is thrown.
+    { # Defines circumstances to be checked before allowing a behavior
+      "activeTimeRange": [ # Range of time(s) specifying when condition is active. Maximum of 10 time ranges.
+        { # Used for time-dependent conditions.
+          "endTime": "A String", # End of time range. Range is inclusive. Must be in the future.
+          "startTime": "A String", # Start of time range. Range is inclusive.
+        },
+      ],
+      "queryTerms": [ # Search only A list of terms to match the query on. Maximum of 10 query terms.
+        { # Matcher for search request query
+          "fullMatch": True or False, # Whether the search query needs to exactly match the query term.
+          "value": "A String", # The specific query value to match against Must be lowercase, must be UTF-8. Can have at most 3 space separated terms if full_match is true. Cannot be an empty string. Maximum length of 5000 characters.
+        },
+      ],
+    },
+  ],
+  "displayName": "A String", # Required. Human readable name. The identifier used in UI views. Must be UTF-8 encoded string. Length limit is 128 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  "filterAction": { # Specified which products may be included in results. Uses same filter as boost. # Defines a filter-type control Currently not supported by Recommendation
+    "dataStore": "A String", # Required. Specifies which data store's documents can be filtered by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. A filter to apply on the matching condition results. Required Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "name": "A String", # Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`
+  "redirectAction": { # Redirects a shopper to the provided URI. # Defines a redirect-type control.
+    "redirectUri": "A String", # Required. The URI to which the shopper will be redirected. Required. URI must have length equal or less than 2000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "solutionType": "A String", # Required. Immutable. What solution the control belongs to. Must be compatible with vertical of resource. Otherwise an INVALID ARGUMENT error is thrown.
+  "synonymsAction": { # Creates a set of terms that will act as synonyms of one another. Example: "happy" will also be considered as "glad", "glad" will also be considered as "happy". # Treats a group of terms as synonyms of one another.
+    "synonyms": [ # Defines a set of synonyms. Can specify up to 100 synonyms. Must specify at least 2 synonyms. Otherwise an INVALID ARGUMENT error is thrown.
+      "A String",
+    ],
+  },
+  "useCases": [ # Specifies the use case for the control. Affects what condition fields can be set. Only applies to SOLUTION_TYPE_SEARCH. Currently only allow one use case per control. Must be set when solution_type is SolutionType.SOLUTION_TYPE_SEARCH.
+    "A String",
+  ],
+}
+
+  controlId: string, Required. The ID to use for the Control, which will become the final component of the Control's resource name. This value must be within 1-63 characters. Valid characters are /a-z-_/.
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Defines a conditioned behavior to employ during serving. Must be attached to a ServingConfig to be considered at serving time. Permitted actions dependent on `SolutionType`.
+  "associatedServingConfigIds": [ # Output only. List of all ServingConfig ids this control is attached to. May take up to 10 minutes to update after changes.
+    "A String",
+  ],
+  "boostAction": { # Adjusts order of products in returned list. # Defines a boost-type control
+    "boost": 3.14, # Required. Strength of the boost, which should be in [-1, 1]. Negative boost means demotion. Default is 0.0 (No-op).
+    "dataStore": "A String", # Required. Specifies which data store's documents can be boosted by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. Specifies which products to apply the boost to. If no filter is provided all products will be boosted (No-op). Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "conditions": [ # Determines when the associated action will trigger. Omit to always apply the action. Currently only a single condition may be specified. Otherwise an INVALID ARGUMENT error is thrown.
+    { # Defines circumstances to be checked before allowing a behavior
+      "activeTimeRange": [ # Range of time(s) specifying when condition is active. Maximum of 10 time ranges.
+        { # Used for time-dependent conditions.
+          "endTime": "A String", # End of time range. Range is inclusive. Must be in the future.
+          "startTime": "A String", # Start of time range. Range is inclusive.
+        },
+      ],
+      "queryTerms": [ # Search only A list of terms to match the query on. Maximum of 10 query terms.
+        { # Matcher for search request query
+          "fullMatch": True or False, # Whether the search query needs to exactly match the query term.
+          "value": "A String", # The specific query value to match against Must be lowercase, must be UTF-8. Can have at most 3 space separated terms if full_match is true. Cannot be an empty string. Maximum length of 5000 characters.
+        },
+      ],
+    },
+  ],
+  "displayName": "A String", # Required. Human readable name. The identifier used in UI views. Must be UTF-8 encoded string. Length limit is 128 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  "filterAction": { # Specified which products may be included in results. Uses same filter as boost. # Defines a filter-type control Currently not supported by Recommendation
+    "dataStore": "A String", # Required. Specifies which data store's documents can be filtered by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. A filter to apply on the matching condition results. Required Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "name": "A String", # Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`
+  "redirectAction": { # Redirects a shopper to the provided URI. # Defines a redirect-type control.
+    "redirectUri": "A String", # Required. The URI to which the shopper will be redirected. Required. URI must have length equal or less than 2000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "solutionType": "A String", # Required. Immutable. What solution the control belongs to. Must be compatible with vertical of resource. Otherwise an INVALID ARGUMENT error is thrown.
+  "synonymsAction": { # Creates a set of terms that will act as synonyms of one another. Example: "happy" will also be considered as "glad", "glad" will also be considered as "happy". # Treats a group of terms as synonyms of one another.
+    "synonyms": [ # Defines a set of synonyms. Can specify up to 100 synonyms. Must specify at least 2 synonyms. Otherwise an INVALID ARGUMENT error is thrown.
+      "A String",
+    ],
+  },
+  "useCases": [ # Specifies the use case for the control. Affects what condition fields can be set. Only applies to SOLUTION_TYPE_SEARCH. Currently only allow one use case per control. Must be set when solution_type is SolutionType.SOLUTION_TYPE_SEARCH.
+    "A String",
+  ],
+}
+
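+Example (a minimal sketch using google-api-python-client; the project
+number, control ID, boost filter, and use-case value below are hypothetical
+placeholders):
+
+    from googleapiclient.discovery import build
+
+    # Credentials are resolved from Application Default Credentials.
+    service = build("discoveryengine", "v1alpha")
+
+    parent = ("projects/123/locations/global/"
+              "collections/default_collection/dataStores/default_data_store")
+
+    control = {
+        "displayName": "Demote out-of-stock items",
+        "solutionType": "SOLUTION_TYPE_SEARCH",
+        "useCases": ["SEARCH_USE_CASE_SEARCH"],  # assumed enum value
+        "conditions": [{
+            "queryTerms": [{"value": "shoes", "fullMatch": False}],
+        }],
+        "boostAction": {
+            "boost": -0.5,  # negative values demote matching documents
+            "dataStore": parent,
+            "filter": 'categories: ANY("out_of_stock")',  # hypothetical filter
+        },
+    }
+
+    response = (
+        service.projects().locations().collections().engines().controls()
+        .create(parent=parent, body=control, controlId="demote-out-of-stock")
+        .execute()
+    )
+    print(response["name"])
+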
+ +
+ delete(name, x__xgafv=None) +
Deletes a Control. If the Control to delete does not exist, a NOT_FOUND error is returned.
+
+Args:
+  name: string, Required. The resource name of the Control to delete. Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}/controls/{control_id}` (required)
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # A generic empty message that you can re-use to avoid defining duplicated empty messages in your APIs. A typical example is to use it as the request or the response type of an API method. For instance: service Foo { rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty); }
+}
+
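+Example (a minimal sketch reusing the `service` and `parent` objects from
+the create() example above; the control ID is a hypothetical placeholder):
+
+    name = parent + "/controls/demote-out-of-stock"
+
+    # A successful delete returns an empty message.
+    (service.projects().locations().collections().engines().controls()
+     .delete(name=name)
+     .execute())
+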
+ +
+ get(name, x__xgafv=None) +
Gets a Control.
+
+Args:
+  name: string, Required. The resource name of the Control to get. Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}/controls/{control_id}` (required)
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Defines a conditioned behavior to employ during serving. Must be attached to a ServingConfig to be considered at serving time. Permitted actions dependent on `SolutionType`.
+  "associatedServingConfigIds": [ # Output only. List of all ServingConfig ids this control is attached to. May take up to 10 minutes to update after changes.
+    "A String",
+  ],
+  "boostAction": { # Adjusts order of products in returned list. # Defines a boost-type control
+    "boost": 3.14, # Required. Strength of the boost, which should be in [-1, 1]. Negative boost means demotion. Default is 0.0 (No-op).
+    "dataStore": "A String", # Required. Specifies which data store's documents can be boosted by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. Specifies which products to apply the boost to. If no filter is provided all products will be boosted (No-op). Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "conditions": [ # Determines when the associated action will trigger. Omit to always apply the action. Currently only a single condition may be specified. Otherwise an INVALID ARGUMENT error is thrown.
+    { # Defines circumstances to be checked before allowing a behavior
+      "activeTimeRange": [ # Range of time(s) specifying when condition is active. Maximum of 10 time ranges.
+        { # Used for time-dependent conditions.
+          "endTime": "A String", # End of time range. Range is inclusive. Must be in the future.
+          "startTime": "A String", # Start of time range. Range is inclusive.
+        },
+      ],
+      "queryTerms": [ # Search only A list of terms to match the query on. Maximum of 10 query terms.
+        { # Matcher for search request query
+          "fullMatch": True or False, # Whether the search query needs to exactly match the query term.
+          "value": "A String", # The specific query value to match against Must be lowercase, must be UTF-8. Can have at most 3 space separated terms if full_match is true. Cannot be an empty string. Maximum length of 5000 characters.
+        },
+      ],
+    },
+  ],
+  "displayName": "A String", # Required. Human readable name. The identifier used in UI views. Must be UTF-8 encoded string. Length limit is 128 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  "filterAction": { # Specified which products may be included in results. Uses same filter as boost. # Defines a filter-type control Currently not supported by Recommendation
+    "dataStore": "A String", # Required. Specifies which data store's documents can be filtered by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. A filter to apply on the matching condition results. Required Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "name": "A String", # Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`
+  "redirectAction": { # Redirects a shopper to the provided URI. # Defines a redirect-type control.
+    "redirectUri": "A String", # Required. The URI to which the shopper will be redirected. Required. URI must have length equal or less than 2000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "solutionType": "A String", # Required. Immutable. What solution the control belongs to. Must be compatible with vertical of resource. Otherwise an INVALID ARGUMENT error is thrown.
+  "synonymsAction": { # Creates a set of terms that will act as synonyms of one another. Example: "happy" will also be considered as "glad", "glad" will also be considered as "happy". # Treats a group of terms as synonyms of one another.
+    "synonyms": [ # Defines a set of synonyms. Can specify up to 100 synonyms. Must specify at least 2 synonyms. Otherwise an INVALID ARGUMENT error is thrown.
+      "A String",
+    ],
+  },
+  "useCases": [ # Specifies the use case for the control. Affects what condition fields can be set. Only applies to SOLUTION_TYPE_SEARCH. Currently only allow one use case per control. Must be set when solution_type is SolutionType.SOLUTION_TYPE_SEARCH.
+    "A String",
+  ],
+}
+
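+Example (a minimal sketch reusing `service` and `parent` from the create()
+example above):
+
+    name = parent + "/controls/demote-out-of-stock"
+
+    control = (service.projects().locations().collections().engines()
+               .controls().get(name=name).execute())
+    print(control["displayName"], control.get("solutionType"))
+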
+ +
+ list(parent, filter=None, pageSize=None, pageToken=None, x__xgafv=None) +
Lists all Controls by their parent DataStore.
+
+Args:
+  parent: string, Required. The data store resource name. Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}` (required)
+  filter: string, Optional. A filter to apply on the list results. Supported features: * List all the products under the parent branch if filter is unset. Currently this field is unsupported.
+  pageSize: integer, Optional. Maximum number of results to return. If unspecified, defaults to 50. Max allowed value is 1000.
+  pageToken: string, Optional. A page token, received from a previous `ListControls` call. Provide this to retrieve the subsequent page.
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Response for ListControls method.
+  "controls": [ # All the Controls for a given data store.
+    { # Defines a conditioned behavior to employ during serving. Must be attached to a ServingConfig to be considered at serving time. Permitted actions dependent on `SolutionType`.
+      "associatedServingConfigIds": [ # Output only. List of all ServingConfig ids this control is attached to. May take up to 10 minutes to update after changes.
+        "A String",
+      ],
+      "boostAction": { # Adjusts order of products in returned list. # Defines a boost-type control
+        "boost": 3.14, # Required. Strength of the boost, which should be in [-1, 1]. Negative boost means demotion. Default is 0.0 (No-op).
+        "dataStore": "A String", # Required. Specifies which data store's documents can be boosted by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+        "filter": "A String", # Required. Specifies which products to apply the boost to. If no filter is provided all products will be boosted (No-op). Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+      },
+      "conditions": [ # Determines when the associated action will trigger. Omit to always apply the action. Currently only a single condition may be specified. Otherwise an INVALID ARGUMENT error is thrown.
+        { # Defines circumstances to be checked before allowing a behavior
+          "activeTimeRange": [ # Range of time(s) specifying when condition is active. Maximum of 10 time ranges.
+            { # Used for time-dependent conditions.
+              "endTime": "A String", # End of time range. Range is inclusive. Must be in the future.
+              "startTime": "A String", # Start of time range. Range is inclusive.
+            },
+          ],
+          "queryTerms": [ # Search only A list of terms to match the query on. Maximum of 10 query terms.
+            { # Matcher for search request query
+              "fullMatch": True or False, # Whether the search query needs to exactly match the query term.
+              "value": "A String", # The specific query value to match against Must be lowercase, must be UTF-8. Can have at most 3 space separated terms if full_match is true. Cannot be an empty string. Maximum length of 5000 characters.
+            },
+          ],
+        },
+      ],
+      "displayName": "A String", # Required. Human readable name. The identifier used in UI views. Must be UTF-8 encoded string. Length limit is 128 characters. Otherwise an INVALID ARGUMENT error is thrown.
+      "filterAction": { # Specified which products may be included in results. Uses same filter as boost. # Defines a filter-type control Currently not supported by Recommendation
+        "dataStore": "A String", # Required. Specifies which data store's documents can be filtered by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+        "filter": "A String", # Required. A filter to apply on the matching condition results. Required Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+      },
+      "name": "A String", # Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`
+      "redirectAction": { # Redirects a shopper to the provided URI. # Defines a redirect-type control.
+        "redirectUri": "A String", # Required. The URI to which the shopper will be redirected. Required. URI must have length equal or less than 2000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+      },
+      "solutionType": "A String", # Required. Immutable. What solution the control belongs to. Must be compatible with vertical of resource. Otherwise an INVALID ARGUMENT error is thrown.
+      "synonymsAction": { # Creates a set of terms that will act as synonyms of one another. Example: "happy" will also be considered as "glad", "glad" will also be considered as "happy". # Treats a group of terms as synonyms of one another.
+        "synonyms": [ # Defines a set of synonyms. Can specify up to 100 synonyms. Must specify at least 2 synonyms. Otherwise an INVALID ARGUMENT error is thrown.
+          "A String",
+        ],
+      },
+      "useCases": [ # Specifies the use case for the control. Affects what condition fields can be set. Only applies to SOLUTION_TYPE_SEARCH. Currently only allow one use case per control. Must be set when solution_type is SolutionType.SOLUTION_TYPE_SEARCH.
+        "A String",
+      ],
+    },
+  ],
+  "nextPageToken": "A String", # Pagination token, if not returned indicates the last page.
+}
+
+ +
+ list_next() +
Retrieves the next page of results.
+
+        Args:
+          previous_request: The request for the previous page. (required)
+          previous_response: The response from the request for the previous page. (required)
+
+        Returns:
+          A request object that you can call 'execute()' on to request the next
+          page. Returns None if there are no more items in the collection.
+        
+
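+Example (a minimal pagination sketch covering list() and list_next(),
+reusing `service` and `parent` from the create() example above):
+
+    controls = (service.projects().locations().collections()
+                .engines().controls())
+
+    request = controls.list(parent=parent, pageSize=50)
+    while request is not None:
+        response = request.execute()
+        for control in response.get("controls", []):
+            print(control["name"])
+        # list_next() returns None once the last page has been consumed.
+        request = controls.list_next(previous_request=request,
+                                     previous_response=response)
+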
+ +
+ patch(name, body=None, updateMask=None, x__xgafv=None) +
Updates a Control. Control action type cannot be changed. If the Control to update does not exist, a NOT_FOUND error is returned.
+
+Args:
+  name: string, Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*` (required)
+  body: object, The request body.
+    The object takes the form of:
+
+{ # Defines a conditioned behavior to employ during serving. Must be attached to a ServingConfig to be considered at serving time. Permitted actions dependent on `SolutionType`.
+  "associatedServingConfigIds": [ # Output only. List of all ServingConfig ids this control is attached to. May take up to 10 minutes to update after changes.
+    "A String",
+  ],
+  "boostAction": { # Adjusts order of products in returned list. # Defines a boost-type control
+    "boost": 3.14, # Required. Strength of the boost, which should be in [-1, 1]. Negative boost means demotion. Default is 0.0 (No-op).
+    "dataStore": "A String", # Required. Specifies which data store's documents can be boosted by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. Specifies which products to apply the boost to. If no filter is provided all products will be boosted (No-op). Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "conditions": [ # Determines when the associated action will trigger. Omit to always apply the action. Currently only a single condition may be specified. Otherwise an INVALID ARGUMENT error is thrown.
+    { # Defines circumstances to be checked before allowing a behavior
+      "activeTimeRange": [ # Range of time(s) specifying when condition is active. Maximum of 10 time ranges.
+        { # Used for time-dependent conditions.
+          "endTime": "A String", # End of time range. Range is inclusive. Must be in the future.
+          "startTime": "A String", # Start of time range. Range is inclusive.
+        },
+      ],
+      "queryTerms": [ # Search only A list of terms to match the query on. Maximum of 10 query terms.
+        { # Matcher for search request query
+          "fullMatch": True or False, # Whether the search query needs to exactly match the query term.
+          "value": "A String", # The specific query value to match against Must be lowercase, must be UTF-8. Can have at most 3 space separated terms if full_match is true. Cannot be an empty string. Maximum length of 5000 characters.
+        },
+      ],
+    },
+  ],
+  "displayName": "A String", # Required. Human readable name. The identifier used in UI views. Must be UTF-8 encoded string. Length limit is 128 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  "filterAction": { # Specified which products may be included in results. Uses same filter as boost. # Defines a filter-type control Currently not supported by Recommendation
+    "dataStore": "A String", # Required. Specifies which data store's documents can be filtered by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. A filter to apply on the matching condition results. Required Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "name": "A String", # Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`
+  "redirectAction": { # Redirects a shopper to the provided URI. # Defines a redirect-type control.
+    "redirectUri": "A String", # Required. The URI to which the shopper will be redirected. Required. URI must have length equal or less than 2000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "solutionType": "A String", # Required. Immutable. What solution the control belongs to. Must be compatible with vertical of resource. Otherwise an INVALID ARGUMENT error is thrown.
+  "synonymsAction": { # Creates a set of terms that will act as synonyms of one another. Example: "happy" will also be considered as "glad", "glad" will also be considered as "happy". # Treats a group of terms as synonyms of one another.
+    "synonyms": [ # Defines a set of synonyms. Can specify up to 100 synonyms. Must specify at least 2 synonyms. Otherwise an INVALID ARGUMENT error is thrown.
+      "A String",
+    ],
+  },
+  "useCases": [ # Specifies the use case for the control. Affects what condition fields can be set. Only applies to SOLUTION_TYPE_SEARCH. Currently only allow one use case per control. Must be set when solution_type is SolutionType.SOLUTION_TYPE_SEARCH.
+    "A String",
+  ],
+}
+
+  updateMask: string, Optional. Indicates which fields in the provided Control to update. The following are NOT supported: * Control.name * Control.solution_type If not set or empty, all supported fields are updated.
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Defines a conditioned behavior to employ during serving. Must be attached to a ServingConfig to be considered at serving time. Permitted actions dependent on `SolutionType`.
+  "associatedServingConfigIds": [ # Output only. List of all ServingConfig ids this control is attached to. May take up to 10 minutes to update after changes.
+    "A String",
+  ],
+  "boostAction": { # Adjusts order of products in returned list. # Defines a boost-type control
+    "boost": 3.14, # Required. Strength of the boost, which should be in [-1, 1]. Negative boost means demotion. Default is 0.0 (No-op).
+    "dataStore": "A String", # Required. Specifies which data store's documents can be boosted by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. Specifies which products to apply the boost to. If no filter is provided all products will be boosted (No-op). Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "conditions": [ # Determines when the associated action will trigger. Omit to always apply the action. Currently only a single condition may be specified. Otherwise an INVALID ARGUMENT error is thrown.
+    { # Defines circumstances to be checked before allowing a behavior
+      "activeTimeRange": [ # Range of time(s) specifying when condition is active. Maximum of 10 time ranges.
+        { # Used for time-dependent conditions.
+          "endTime": "A String", # End of time range. Range is inclusive. Must be in the future.
+          "startTime": "A String", # Start of time range. Range is inclusive.
+        },
+      ],
+      "queryTerms": [ # Search only A list of terms to match the query on. Maximum of 10 query terms.
+        { # Matcher for search request query
+          "fullMatch": True or False, # Whether the search query needs to exactly match the query term.
+          "value": "A String", # The specific query value to match against Must be lowercase, must be UTF-8. Can have at most 3 space separated terms if full_match is true. Cannot be an empty string. Maximum length of 5000 characters.
+        },
+      ],
+    },
+  ],
+  "displayName": "A String", # Required. Human readable name. The identifier used in UI views. Must be UTF-8 encoded string. Length limit is 128 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  "filterAction": { # Specified which products may be included in results. Uses same filter as boost. # Defines a filter-type control Currently not supported by Recommendation
+    "dataStore": "A String", # Required. Specifies which data store's documents can be filtered by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. A filter to apply on the matching condition results. Required Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "name": "A String", # Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`
+  "redirectAction": { # Redirects a shopper to the provided URI. # Defines a redirect-type control.
+    "redirectUri": "A String", # Required. The URI to which the shopper will be redirected. Required. URI must have length equal or less than 2000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "solutionType": "A String", # Required. Immutable. What solution the control belongs to. Must be compatible with vertical of resource. Otherwise an INVALID ARGUMENT error is thrown.
+  "synonymsAction": { # Creates a set of terms that will act as synonyms of one another. Example: "happy" will also be considered as "glad", "glad" will also be considered as "happy". # Treats a group of terms as synonyms of one another.
+    "synonyms": [ # Defines a set of synonyms. Can specify up to 100 synonyms. Must specify at least 2 synonyms. Otherwise an INVALID ARGUMENT error is thrown.
+      "A String",
+    ],
+  },
+  "useCases": [ # Specifies the use case for the control. Affects what condition fields can be set. Only applies to SOLUTION_TYPE_SEARCH. Currently only allow one use case per control. Must be set when solution_type is SolutionType.SOLUTION_TYPE_SEARCH.
+    "A String",
+  ],
+}
+
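+Example (a minimal sketch reusing `service` and `parent` from the create()
+example above; only the fields named in updateMask are changed):
+
+    name = parent + "/controls/demote-out-of-stock"
+
+    updated = (
+        service.projects().locations().collections().engines().controls()
+        .patch(name=name,
+               body={"displayName": "Demote out-of-stock items (v2)"},
+               updateMask="displayName")
+        .execute()
+    )
+    print(updated["displayName"])
+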
+ + \ No newline at end of file diff --git a/docs/dyn/discoveryengine_v1alpha.projects.locations.collections.engines.conversations.html b/docs/dyn/discoveryengine_v1alpha.projects.locations.collections.engines.conversations.html index 15e4f6ffbc8..2ce107dc657 100644 --- a/docs/dyn/discoveryengine_v1alpha.projects.locations.collections.engines.conversations.html +++ b/docs/dyn/discoveryengine_v1alpha.projects.locations.collections.engines.conversations.html @@ -408,7 +408,7 @@

Method Details

"pageEnd": 42, # The end page of the chunk. "pageStart": 42, # The start page of the chunk. }, - "relevanceScore": 3.14, # Represents the relevance score based on similarity. Higher score represents the chunk relevance. The score is in range [-1.0, 1.0]. Only populated on SearchService.SearchResponse + "relevanceScore": 3.14, # Output only. Represents the relevance score based on similarity. Higher score indicates higher chunk relevance. The score is in range [-1.0, 1.0]. Only populated on SearchService.SearchResponse. }, "document": { # Document captures all raw metadata information of items to be recommended or searched. # The document data snippet in the search response. Only fields that are marked as `retrievable` are populated. "aclInfo": { # ACL Information of the Document. # Access control information for the document. diff --git a/docs/dyn/discoveryengine_v1alpha.projects.locations.collections.engines.html b/docs/dyn/discoveryengine_v1alpha.projects.locations.collections.engines.html index 9ad76823954..e74ee7a8288 100644 --- a/docs/dyn/discoveryengine_v1alpha.projects.locations.collections.engines.html +++ b/docs/dyn/discoveryengine_v1alpha.projects.locations.collections.engines.html @@ -74,6 +74,11 @@

Discovery Engine API . projects . locations . collections . engines

Instance Methods

+

+ controls() +

+

Returns the controls Resource.

+

conversations()

@@ -153,7 +158,7 @@

Method Details

"dialogflowAgent": "A String", # The resource name of a Dialogflow agent, that this Chat Engine refers to. Format: `projects//locations//agents/`. }, "commonConfig": { # Common configurations for an Engine. # Common config spec that specifies the metadata of the engine. - "companyName": "A String", # Immutable. The name of the company, business or entity that is associated with the engine. Setting this may help improve LLM related features. + "companyName": "A String", # The name of the company, business or entity that is associated with the engine. Setting this may help improve LLM related features. }, "createTime": "A String", # Output only. Timestamp the Recommendation Engine was created at. "dataStoreIds": [ # The data stores associated with this engine. For SOLUTION_TYPE_SEARCH and SOLUTION_TYPE_RECOMMENDATION type of engines, they can only associate with at most one data store. If solution_type is SOLUTION_TYPE_CHAT, multiple DataStores in the same Collection can be associated here. Note that when used in CreateEngineRequest, one DataStore id must be provided as the system will use it for necessary initializations. @@ -282,7 +287,7 @@

Method Details

"dialogflowAgent": "A String", # The resource name of a Dialogflow agent, that this Chat Engine refers to. Format: `projects//locations//agents/`. }, "commonConfig": { # Common configurations for an Engine. # Common config spec that specifies the metadata of the engine. - "companyName": "A String", # Immutable. The name of the company, business or entity that is associated with the engine. Setting this may help improve LLM related features. + "companyName": "A String", # The name of the company, business or entity that is associated with the engine. Setting this may help improve LLM related features. }, "createTime": "A String", # Output only. Timestamp the Recommendation Engine was created at. "dataStoreIds": [ # The data stores associated with this engine. For SOLUTION_TYPE_SEARCH and SOLUTION_TYPE_RECOMMENDATION type of engines, they can only associate with at most one data store. If solution_type is SOLUTION_TYPE_CHAT, multiple DataStores in the same Collection can be associated here. Note that when used in CreateEngineRequest, one DataStore id must be provided as the system will use it for necessary initializations. @@ -352,7 +357,7 @@

Method Details

"dialogflowAgent": "A String", # The resource name of a Dialogflow agent, that this Chat Engine refers to. Format: `projects//locations//agents/`. }, "commonConfig": { # Common configurations for an Engine. # Common config spec that specifies the metadata of the engine. - "companyName": "A String", # Immutable. The name of the company, business or entity that is associated with the engine. Setting this may help improve LLM related features. + "companyName": "A String", # The name of the company, business or entity that is associated with the engine. Setting this may help improve LLM related features. }, "createTime": "A String", # Output only. Timestamp the Recommendation Engine was created at. "dataStoreIds": [ # The data stores associated with this engine. For SOLUTION_TYPE_SEARCH and SOLUTION_TYPE_RECOMMENDATION type of engines, they can only associate with at most one data store. If solution_type is SOLUTION_TYPE_CHAT, multiple DataStores in the same Collection can be associated here. Note that when used in CreateEngineRequest, one DataStore id must be provided as the system will use it for necessary initializations. @@ -429,7 +434,7 @@

Method Details

"dialogflowAgent": "A String", # The resource name of a Dialogflow agent, that this Chat Engine refers to. Format: `projects//locations//agents/`. }, "commonConfig": { # Common configurations for an Engine. # Common config spec that specifies the metadata of the engine. - "companyName": "A String", # Immutable. The name of the company, business or entity that is associated with the engine. Setting this may help improve LLM related features. + "companyName": "A String", # The name of the company, business or entity that is associated with the engine. Setting this may help improve LLM related features. }, "createTime": "A String", # Output only. Timestamp the Recommendation Engine was created at. "dataStoreIds": [ # The data stores associated with this engine. For SOLUTION_TYPE_SEARCH and SOLUTION_TYPE_RECOMMENDATION type of engines, they can only associate with at most one data store. If solution_type is SOLUTION_TYPE_CHAT, multiple DataStores in the same Collection can be associated here. Note that when used in CreateEngineRequest, one DataStore id must be provided as the system will use it for necessary initializations. @@ -488,7 +493,7 @@

Method Details

"dialogflowAgent": "A String", # The resource name of a Dialogflow agent, that this Chat Engine refers to. Format: `projects//locations//agents/`. }, "commonConfig": { # Common configurations for an Engine. # Common config spec that specifies the metadata of the engine. - "companyName": "A String", # Immutable. The name of the company, business or entity that is associated with the engine. Setting this may help improve LLM related features. + "companyName": "A String", # The name of the company, business or entity that is associated with the engine. Setting this may help improve LLM related features. }, "createTime": "A String", # Output only. Timestamp the Recommendation Engine was created at. "dataStoreIds": [ # The data stores associated with this engine. For SOLUTION_TYPE_SEARCH and SOLUTION_TYPE_RECOMMENDATION type of engines, they can only associate with at most one data store. If solution_type is SOLUTION_TYPE_CHAT, multiple DataStores in the same Collection can be associated here. Note that when used in CreateEngineRequest, one DataStore id must be provided as the system will use it for necessary initializations. @@ -559,7 +564,7 @@

Method Details

"dialogflowAgent": "A String", # The resource name of a Dialogflow agent, that this Chat Engine refers to. Format: `projects//locations//agents/`. }, "commonConfig": { # Common configurations for an Engine. # Common config spec that specifies the metadata of the engine. - "companyName": "A String", # Immutable. The name of the company, business or entity that is associated with the engine. Setting this may help improve LLM related features. + "companyName": "A String", # The name of the company, business or entity that is associated with the engine. Setting this may help improve LLM related features. }, "createTime": "A String", # Output only. Timestamp the Recommendation Engine was created at. "dataStoreIds": [ # The data stores associated with this engine. For SOLUTION_TYPE_SEARCH and SOLUTION_TYPE_RECOMMENDATION type of engines, they can only associate with at most one data store. If solution_type is SOLUTION_TYPE_CHAT, multiple DataStores in the same Collection can be associated here. Note that when used in CreateEngineRequest, one DataStore id must be provided as the system will use it for necessary initializations. @@ -630,7 +635,7 @@

Method Details

"dialogflowAgent": "A String", # The resource name of a Dialogflow agent, that this Chat Engine refers to. Format: `projects//locations//agents/`. }, "commonConfig": { # Common configurations for an Engine. # Common config spec that specifies the metadata of the engine. - "companyName": "A String", # Immutable. The name of the company, business or entity that is associated with the engine. Setting this may help improve LLM related features. + "companyName": "A String", # The name of the company, business or entity that is associated with the engine. Setting this may help improve LLM related features. }, "createTime": "A String", # Output only. Timestamp the Recommendation Engine was created at. "dataStoreIds": [ # The data stores associated with this engine. For SOLUTION_TYPE_SEARCH and SOLUTION_TYPE_RECOMMENDATION type of engines, they can only associate with at most one data store. If solution_type is SOLUTION_TYPE_CHAT, multiple DataStores in the same Collection can be associated here. Note that when used in CreateEngineRequest, one DataStore id must be provided as the system will use it for necessary initializations. diff --git a/docs/dyn/discoveryengine_v1alpha.projects.locations.collections.engines.servingConfigs.html b/docs/dyn/discoveryengine_v1alpha.projects.locations.collections.engines.servingConfigs.html index 7751d28885d..2a47462ccd9 100644 --- a/docs/dyn/discoveryengine_v1alpha.projects.locations.collections.engines.servingConfigs.html +++ b/docs/dyn/discoveryengine_v1alpha.projects.locations.collections.engines.servingConfigs.html @@ -115,6 +115,7 @@

Method Details

"answerGenerationSpec": { # Answer generation specification. # Answer generation specification. "answerLanguageCode": "A String", # Language code for Answer. Use language tags defined by [BCP47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt). Note: This is an experimental feature. "ignoreAdversarialQuery": True or False, # Specifies whether to filter out adversarial queries. The default value is `false`. Google employs search-query classification to detect adversarial queries. No answer is returned if the search query is classified as an adversarial query. For example, a user might ask a question regarding negative comments about the company or submit a query designed to generate unsafe, policy-violating output. If this field is set to `true`, we skip generating answers for adversarial queries and return fallback messages instead. + "ignoreLowRelevantContent": True or False, # Specifies whether to filter out queries that have low relevance. If this field is set to `false`, all search results are used regardless of relevance to generate answers. If set to `true` or unset, the behavior will be determined automatically by the service. "ignoreNonAnswerSeekingQuery": True or False, # Specifies whether to filter out queries that are not answer-seeking. The default value is `false`. Google employs search-query classification to detect answer-seeking queries. No answer is returned if the search query is classified as a non-answer seeking query. If this field is set to `true`, we skip generating answers for non-answer seeking queries and return fallback messages instead. "includeCitations": True or False, # Specifies whether to include citation metadata in the answer. The default value is `false`. "modelSpec": { # Answer Generation Model specification. # Answer generation model specification. @@ -170,6 +171,11 @@

Method Details

"customFineTuningSpec": { # Defines custom fine tuning spec. # Custom fine tuning configs. "enableSearchAdaptor": True or False, # Whether or not to enable and include custom fine tuned search adaptor model. }, + "dataStoreSpecs": [ # Specs defining dataStores to filter on in a search call and configurations for those dataStores. This is only considered for engines with multiple dataStores use case. For single dataStore within an engine, they should use the specs at the top level. + { # A struct to define data stores to filter on in a search call and configurations for those data stores. A maximum of 1 DataStoreSpec per data_store is allowed. Otherwise, an `INVALID_ARGUMENT` error is returned. + "dataStore": "A String", # Required. Full resource name of DataStore, such as `projects/{project}/locations/{location}/collections/{collection_id}/dataStores/{data_store_id}`. + }, + ], "filter": "A String", # The filter syntax consists of an expression language for constructing a predicate from one or more fields of the documents being filtered. Filter expression is case-sensitive. This will be used to filter search results which may affect the Answer response. If this field is unrecognizable, an `INVALID_ARGUMENT` is returned. Filtering in Vertex AI Search is done by mapping the LHS filter key to a key property defined in the Vertex AI Search backend -- this mapping is defined by the customer in their schema. For example a media customers might have a field 'name' in their schema. In this case the filter would look like this: filter --> name:'ANY("king kong")' For more information about filtering including syntax and filter operators, see [Filter](https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata) "maxReturnResults": 42, # Number of search results to return. The default value is 10. "orderBy": "A String", # The order in which documents are returned. Documents can be ordered by a field in an Document object. Leave it unset if ordered by relevance. `order_by` expression is case-sensitive. For more information on ordering, see [Ordering](https://cloud.google.com/retail/docs/filter-and-order#order) If this field is unrecognizable, an `INVALID_ARGUMENT` is returned. @@ -257,6 +263,9 @@

Method Details

"documentMetadata": { # Document metadata. # Document metadata. "document": "A String", # Document resource name. "pageIdentifier": "A String", # Page identifier. + "structData": { # The structured JSON metadata for the document. It is populated from the struct data from the Chunk in search result. + "a_key": "", # Properties of the object. + }, "title": "A String", # Title. "uri": "A String", # URI for the document. }, @@ -270,6 +279,9 @@

Method Details

}, ], "document": "A String", # Document resource name. + "structData": { # The structured JSON metadata for the document. It is populated from the struct data from the Chunk in search result. + "a_key": "", # Properties of the object. + }, "title": "A String", # Title. "uri": "A String", # URI for the document. }, @@ -387,7 +399,7 @@

Method Details

"numPreviousSegments": 42, # Specifies whether to also include the adjacent from each selected segments. Return at most `num_previous_segments` segments before each selected segments. "returnExtractiveSegmentScore": True or False, # Specifies whether to return the confidence score from the extractive segments in each search result. This feature is available only for new or allowlisted data stores. To allowlist your data store, contact your Customer Engineer. The default value is `false`. }, - "searchResultMode": "A String", # Specifies the search result mode. If unspecified, the search result mode is based on DataStore.DocumentProcessingConfig.chunking_config: * If DataStore.DocumentProcessingConfig.chunking_config is specified, it defaults to `CHUNKS`. * Otherwise, it defaults to `DOCUMENTS`. + "searchResultMode": "A String", # Specifies the search result mode. If unspecified, the search result mode defaults to `DOCUMENTS`. "snippetSpec": { # A specification for configuring snippets in a search response. # If `snippetSpec` is not specified, snippets are not included in the search response. "maxSnippetCount": 42, # [DEPRECATED] This field is deprecated. To control snippet return, use `return_snippet` field. For backwards compatibility, we will return snippet if max_snippet_count > 0. "referenceOnly": True or False, # [DEPRECATED] This field is deprecated and will have no affect on the snippet. @@ -494,7 +506,7 @@

Method Details

"numPreviousSegments": 42, # Specifies whether to also include the adjacent from each selected segments. Return at most `num_previous_segments` segments before each selected segments. "returnExtractiveSegmentScore": True or False, # Specifies whether to return the confidence score from the extractive segments in each search result. This feature is available only for new or allowlisted data stores. To allowlist your data store, contact your Customer Engineer. The default value is `false`. }, - "searchResultMode": "A String", # Specifies the search result mode. If unspecified, the search result mode is based on DataStore.DocumentProcessingConfig.chunking_config: * If DataStore.DocumentProcessingConfig.chunking_config is specified, it defaults to `CHUNKS`. * Otherwise, it defaults to `DOCUMENTS`. + "searchResultMode": "A String", # Specifies the search result mode. If unspecified, the search result mode defaults to `DOCUMENTS`. "snippetSpec": { # A specification for configuring snippets in a search response. # If `snippetSpec` is not specified, snippets are not included in the search response. "maxSnippetCount": 42, # [DEPRECATED] This field is deprecated. To control snippet return, use `return_snippet` field. For backwards compatibility, we will return snippet if max_snippet_count > 0. "referenceOnly": True or False, # [DEPRECATED] This field is deprecated and will have no affect on the snippet. @@ -607,7 +619,7 @@

Method Details

"numPreviousSegments": 42, # Specifies whether to also include the adjacent from each selected segments. Return at most `num_previous_segments` segments before each selected segments. "returnExtractiveSegmentScore": True or False, # Specifies whether to return the confidence score from the extractive segments in each search result. This feature is available only for new or allowlisted data stores. To allowlist your data store, contact your Customer Engineer. The default value is `false`. }, - "searchResultMode": "A String", # Specifies the search result mode. If unspecified, the search result mode is based on DataStore.DocumentProcessingConfig.chunking_config: * If DataStore.DocumentProcessingConfig.chunking_config is specified, it defaults to `CHUNKS`. * Otherwise, it defaults to `DOCUMENTS`. + "searchResultMode": "A String", # Specifies the search result mode. If unspecified, the search result mode defaults to `DOCUMENTS`. "snippetSpec": { # A specification for configuring snippets in a search response. # If `snippetSpec` is not specified, snippets are not included in the search response. "maxSnippetCount": 42, # [DEPRECATED] This field is deprecated. To control snippet return, use `return_snippet` field. For backwards compatibility, we will return snippet if max_snippet_count > 0. "referenceOnly": True or False, # [DEPRECATED] This field is deprecated and will have no affect on the snippet. @@ -703,7 +715,7 @@

Method Details

"numPreviousSegments": 42, # Specifies whether to also include the adjacent from each selected segments. Return at most `num_previous_segments` segments before each selected segments. "returnExtractiveSegmentScore": True or False, # Specifies whether to return the confidence score from the extractive segments in each search result. This feature is available only for new or allowlisted data stores. To allowlist your data store, contact your Customer Engineer. The default value is `false`. }, - "searchResultMode": "A String", # Specifies the search result mode. If unspecified, the search result mode is based on DataStore.DocumentProcessingConfig.chunking_config: * If DataStore.DocumentProcessingConfig.chunking_config is specified, it defaults to `CHUNKS`. * Otherwise, it defaults to `DOCUMENTS`. + "searchResultMode": "A String", # Specifies the search result mode. If unspecified, the search result mode defaults to `DOCUMENTS`. "snippetSpec": { # A specification for configuring snippets in a search response. # If `snippetSpec` is not specified, snippets are not included in the search response. "maxSnippetCount": 42, # [DEPRECATED] This field is deprecated. To control snippet return, use `return_snippet` field. For backwards compatibility, we will return snippet if max_snippet_count > 0. "referenceOnly": True or False, # [DEPRECATED] This field is deprecated and will have no affect on the snippet. @@ -790,6 +802,7 @@

Method Details

"selectedPosition": 42, # End user selected CompleteQueryResponse.QuerySuggestion.suggestion position, starting from 0. "selectedSuggestion": "A String", # End user selected CompleteQueryResponse.QuerySuggestion.suggestion. }, + "dataStore": "A String", # The DataStore resource full name, of the form `projects/{project}/locations/{location}/collections/{collection_id}/dataStores/{data_store_id}`. Optional. Only required for user events whose data store can't by determined by UserEvent.engine or UserEvent.documents. If data store is set in the parent of write/import/collect user event requests, this field can be omitted. "directUserRequest": True or False, # Should set to true if the request is made directly from the end user, in which case the UserEvent.user_info.user_agent can be populated from the HTTP request. This flag should be set only if the API request is made directly from the end user such as a mobile app (and not if a gateway or a server is processing and pushing the user events). This should not be set when using the JavaScript tag in UserEventService.CollectUserEvent. "documents": [ # List of Documents associated with this user event. This field is optional except for the following event types: * `view-item` * `add-to-cart` * `purchase` * `media-play` * `media-complete` In a `search` event, this field represents the documents returned to the end user on the current page (the end user may have not finished browsing the whole page yet). When a new page is returned to the end user, after pagination/filtering/ordering even for the same query, a new `search` event with different UserEvent.documents is desired. { # Detailed document information associated with a user event. @@ -802,6 +815,7 @@

Method Details

"uri": "A String", # The Document URI - only allowed for website data stores. }, ], + "engine": "A String", # The Engine resource name, in the form of `projects/{project}/locations/{location}/collections/{collection_id}/engines/{engine_id}`. Optional. Only required for Engine produced user events. For example, user events from blended search. "eventTime": "A String", # Only required for UserEventService.ImportUserEvents method. Timestamp of when the user event happened. "eventType": "A String", # Required. User event type. Allowed values are: Generic values: * `search`: Search for Documents. * `view-item`: Detailed page view of a Document. * `view-item-list`: View of a panel or ordered list of Documents. * `view-home-page`: View of the home page. * `view-category-page`: View of a category page, e.g. Home > Men > Jeans Retail-related values: * `add-to-cart`: Add an item(s) to cart, e.g. in Retail online shopping * `purchase`: Purchase an item(s) Media-related values: * `media-play`: Start/resume watching a video, playing a song, etc. * `media-complete`: Finished or stopped midway through a video, song, etc. "filter": "A String", # The filter syntax consists of an expression language for constructing a predicate from one or more fields of the documents being filtered. One example is for `search` events, the associated SearchRequest may contain a filter expression in SearchRequest.filter conforming to https://google.aip.dev/160#filtering. Similarly, for `view-item-list` events that are generated from a RecommendRequest, this field may be populated directly from RecommendRequest.filter conforming to https://google.aip.dev/160#filtering. The value must be a UTF-8 encoded string with a length limit of 1,000 characters. Otherwise, an `INVALID_ARGUMENT` error is returned. @@ -952,7 +966,7 @@

Method Details

"numPreviousSegments": 42, # Specifies whether to also include the adjacent from each selected segments. Return at most `num_previous_segments` segments before each selected segments. "returnExtractiveSegmentScore": True or False, # Specifies whether to return the confidence score from the extractive segments in each search result. This feature is available only for new or allowlisted data stores. To allowlist your data store, contact your Customer Engineer. The default value is `false`. }, - "searchResultMode": "A String", # Specifies the search result mode. If unspecified, the search result mode is based on DataStore.DocumentProcessingConfig.chunking_config: * If DataStore.DocumentProcessingConfig.chunking_config is specified, it defaults to `CHUNKS`. * Otherwise, it defaults to `DOCUMENTS`. + "searchResultMode": "A String", # Specifies the search result mode. If unspecified, the search result mode defaults to `DOCUMENTS`. "snippetSpec": { # A specification for configuring snippets in a search response. # If `snippetSpec` is not specified, snippets are not included in the search response. "maxSnippetCount": 42, # [DEPRECATED] This field is deprecated. To control snippet return, use `return_snippet` field. For backwards compatibility, we will return snippet if max_snippet_count > 0. "referenceOnly": True or False, # [DEPRECATED] This field is deprecated and will have no affect on the snippet. @@ -1136,7 +1150,7 @@

Method Details

"pageEnd": 42, # The end page of the chunk. "pageStart": 42, # The start page of the chunk. }, - "relevanceScore": 3.14, # Represents the relevance score based on similarity. Higher score represents the chunk relevance. The score is in range [-1.0, 1.0]. Only populated on SearchService.SearchResponse + "relevanceScore": 3.14, # Output only. Represents the relevance score based on similarity. Higher score indicates higher chunk relevance. The score is in range [-1.0, 1.0]. Only populated on SearchService.SearchResponse. }, "document": { # Document captures all raw metadata information of items to be recommended or searched. # The document data snippet in the search response. Only fields that are marked as `retrievable` are populated. "aclInfo": { # ACL Information of the Document. # Access control information for the document. diff --git a/docs/dyn/discoveryengine_v1alpha.projects.locations.collections.engines.sessions.answers.html b/docs/dyn/discoveryengine_v1alpha.projects.locations.collections.engines.sessions.answers.html index 1833af22bc1..3740ce2341a 100644 --- a/docs/dyn/discoveryengine_v1alpha.projects.locations.collections.engines.sessions.answers.html +++ b/docs/dyn/discoveryengine_v1alpha.projects.locations.collections.engines.sessions.answers.html @@ -135,6 +135,9 @@

Method Details

"documentMetadata": { # Document metadata. # Document metadata. "document": "A String", # Document resource name. "pageIdentifier": "A String", # Page identifier. + "structData": { # The structured JSON metadata for the document. It is populated from the struct data from the Chunk in search result. + "a_key": "", # Properties of the object. + }, "title": "A String", # Title. "uri": "A String", # URI for the document. }, @@ -148,6 +151,9 @@

Method Details

}, ], "document": "A String", # Document resource name. + "structData": { # The structured JSON metadata for the document. It is populated from the struct data from the Chunk in search result. + "a_key": "", # Properties of the object. + }, "title": "A String", # Title. "uri": "A String", # URI for the document. }, diff --git a/docs/dyn/discoveryengine_v1alpha.projects.locations.dataStores.branches.documents.chunks.html b/docs/dyn/discoveryengine_v1alpha.projects.locations.dataStores.branches.documents.chunks.html index 1873622f2aa..cfcc2c0e48d 100644 --- a/docs/dyn/discoveryengine_v1alpha.projects.locations.dataStores.branches.documents.chunks.html +++ b/docs/dyn/discoveryengine_v1alpha.projects.locations.dataStores.branches.documents.chunks.html @@ -132,7 +132,7 @@

Method Details

"pageEnd": 42, # The end page of the chunk. "pageStart": 42, # The start page of the chunk. }, - "relevanceScore": 3.14, # Represents the relevance score based on similarity. Higher score represents the chunk relevance. The score is in range [-1.0, 1.0]. Only populated on SearchService.SearchResponse + "relevanceScore": 3.14, # Output only. Represents the relevance score based on similarity. Higher score indicates higher chunk relevance. The score is in range [-1.0, 1.0]. Only populated on SearchService.SearchResponse. }
@@ -180,7 +180,7 @@

Method Details

"pageEnd": 42, # The end page of the chunk. "pageStart": 42, # The start page of the chunk. }, - "relevanceScore": 3.14, # Represents the relevance score based on similarity. Higher score represents the chunk relevance. The score is in range [-1.0, 1.0]. Only populated on SearchService.SearchResponse + "relevanceScore": 3.14, # Output only. Represents the relevance score based on similarity. Higher score indicates higher chunk relevance. The score is in range [-1.0, 1.0]. Only populated on SearchService.SearchResponse. }, ], "nextPageToken": "A String", # A token that can be sent as ListChunksRequest.page_token to retrieve the next page. If this field is omitted, there are no subsequent pages. diff --git a/docs/dyn/discoveryengine_v1alpha.projects.locations.dataStores.controls.html b/docs/dyn/discoveryengine_v1alpha.projects.locations.dataStores.controls.html new file mode 100644 index 00000000000..a86e1e8d402 --- /dev/null +++ b/docs/dyn/discoveryengine_v1alpha.projects.locations.dataStores.controls.html @@ -0,0 +1,482 @@ + + + +

Discovery Engine API . projects . locations . dataStores . controls

+

Instance Methods

+

+ close()

+

Close httplib2 connections.

+

+ create(parent, body=None, controlId=None, x__xgafv=None)

+

Creates a Control. By default 1000 controls are allowed for a data store. A request can be submitted to adjust this limit. If the Control to create already exists, an ALREADY_EXISTS error is returned.

+

+ delete(name, x__xgafv=None)

+

Deletes a Control. If the Control to delete does not exist, a NOT_FOUND error is returned.

+

+ get(name, x__xgafv=None)

+

Gets a Control.

+

+ list(parent, filter=None, pageSize=None, pageToken=None, x__xgafv=None)

+

Lists all Controls by their parent DataStore.

+

+ list_next()

+

Retrieves the next page of results.

+

+ patch(name, body=None, updateMask=None, x__xgafv=None)

+

Updates a Control. Control action type cannot be changed. If the Control to update does not exist, a NOT_FOUND error is returned.

+

Method Details

+
+ close() +
Close httplib2 connections.
+
+ +
+ create(parent, body=None, controlId=None, x__xgafv=None) +
Creates a Control. By default 1000 controls are allowed for a data store. A request can be submitted to adjust this limit. If the Control to create already exists, an ALREADY_EXISTS error is returned.
+
+Args:
+  parent: string, Required. Full resource name of parent data store. Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}` (required)
+  body: object, The request body.
+    The object takes the form of:
+
+{ # Defines a conditioned behavior to employ during serving. Must be attached to a ServingConfig to be considered at serving time. Permitted actions dependent on `SolutionType`.
+  "associatedServingConfigIds": [ # Output only. List of all ServingConfig ids this control is attached to. May take up to 10 minutes to update after changes.
+    "A String",
+  ],
+  "boostAction": { # Adjusts order of products in returned list. # Defines a boost-type control
+    "boost": 3.14, # Required. Strength of the boost, which should be in [-1, 1]. Negative boost means demotion. Default is 0.0 (No-op).
+    "dataStore": "A String", # Required. Specifies which data store's documents can be boosted by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. Specifies which products to apply the boost to. If no filter is provided all products will be boosted (No-op). Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "conditions": [ # Determines when the associated action will trigger. Omit to always apply the action. Currently only a single condition may be specified. Otherwise an INVALID ARGUMENT error is thrown.
+    { # Defines circumstances to be checked before allowing a behavior
+      "activeTimeRange": [ # Range of time(s) specifying when condition is active. Maximum of 10 time ranges.
+        { # Used for time-dependent conditions.
+          "endTime": "A String", # End of time range. Range is inclusive. Must be in the future.
+          "startTime": "A String", # Start of time range. Range is inclusive.
+        },
+      ],
+      "queryTerms": [ # Search only A list of terms to match the query on. Maximum of 10 query terms.
+        { # Matcher for search request query
+          "fullMatch": True or False, # Whether the search query needs to exactly match the query term.
+          "value": "A String", # The specific query value to match against Must be lowercase, must be UTF-8. Can have at most 3 space separated terms if full_match is true. Cannot be an empty string. Maximum length of 5000 characters.
+        },
+      ],
+    },
+  ],
+  "displayName": "A String", # Required. Human readable name. The identifier used in UI views. Must be UTF-8 encoded string. Length limit is 128 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  "filterAction": { # Specified which products may be included in results. Uses same filter as boost. # Defines a filter-type control Currently not supported by Recommendation
+    "dataStore": "A String", # Required. Specifies which data store's documents can be filtered by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. A filter to apply on the matching condition results. Required Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "name": "A String", # Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`
+  "redirectAction": { # Redirects a shopper to the provided URI. # Defines a redirect-type control.
+    "redirectUri": "A String", # Required. The URI to which the shopper will be redirected. Required. URI must have length equal or less than 2000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "solutionType": "A String", # Required. Immutable. What solution the control belongs to. Must be compatible with vertical of resource. Otherwise an INVALID ARGUMENT error is thrown.
+  "synonymsAction": { # Creates a set of terms that will act as synonyms of one another. Example: "happy" will also be considered as "glad", "glad" will also be considered as "happy". # Treats a group of terms as synonyms of one another.
+    "synonyms": [ # Defines a set of synonyms. Can specify up to 100 synonyms. Must specify at least 2 synonyms. Otherwise an INVALID ARGUMENT error is thrown.
+      "A String",
+    ],
+  },
+  "useCases": [ # Specifies the use case for the control. Affects what condition fields can be set. Only applies to SOLUTION_TYPE_SEARCH. Currently only allow one use case per control. Must be set when solution_type is SolutionType.SOLUTION_TYPE_SEARCH.
+    "A String",
+  ],
+}
+
+  controlId: string, Required. The ID to use for the Control, which will become the final component of the Control's resource name. This value must be within 1-63 characters. Valid characters are /a-z-_/.
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Defines a conditioned behavior to employ during serving. Must be attached to a ServingConfig to be considered at serving time. Permitted actions dependent on `SolutionType`.
+  "associatedServingConfigIds": [ # Output only. List of all ServingConfig ids this control is attached to. May take up to 10 minutes to update after changes.
+    "A String",
+  ],
+  "boostAction": { # Adjusts order of products in returned list. # Defines a boost-type control
+    "boost": 3.14, # Required. Strength of the boost, which should be in [-1, 1]. Negative boost means demotion. Default is 0.0 (No-op).
+    "dataStore": "A String", # Required. Specifies which data store's documents can be boosted by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. Specifies which products to apply the boost to. If no filter is provided all products will be boosted (No-op). Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "conditions": [ # Determines when the associated action will trigger. Omit to always apply the action. Currently only a single condition may be specified. Otherwise an INVALID ARGUMENT error is thrown.
+    { # Defines circumstances to be checked before allowing a behavior
+      "activeTimeRange": [ # Range of time(s) specifying when condition is active. Maximum of 10 time ranges.
+        { # Used for time-dependent conditions.
+          "endTime": "A String", # End of time range. Range is inclusive. Must be in the future.
+          "startTime": "A String", # Start of time range. Range is inclusive.
+        },
+      ],
+      "queryTerms": [ # Search only A list of terms to match the query on. Maximum of 10 query terms.
+        { # Matcher for search request query
+          "fullMatch": True or False, # Whether the search query needs to exactly match the query term.
+          "value": "A String", # The specific query value to match against Must be lowercase, must be UTF-8. Can have at most 3 space separated terms if full_match is true. Cannot be an empty string. Maximum length of 5000 characters.
+        },
+      ],
+    },
+  ],
+  "displayName": "A String", # Required. Human readable name. The identifier used in UI views. Must be UTF-8 encoded string. Length limit is 128 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  "filterAction": { # Specified which products may be included in results. Uses same filter as boost. # Defines a filter-type control Currently not supported by Recommendation
+    "dataStore": "A String", # Required. Specifies which data store's documents can be filtered by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. A filter to apply on the matching condition results. Required Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "name": "A String", # Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`
+  "redirectAction": { # Redirects a shopper to the provided URI. # Defines a redirect-type control.
+    "redirectUri": "A String", # Required. The URI to which the shopper will be redirected. Required. URI must have length equal or less than 2000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "solutionType": "A String", # Required. Immutable. What solution the control belongs to. Must be compatible with vertical of resource. Otherwise an INVALID ARGUMENT error is thrown.
+  "synonymsAction": { # Creates a set of terms that will act as synonyms of one another. Example: "happy" will also be considered as "glad", "glad" will also be considered as "happy". # Treats a group of terms as synonyms of one another.
+    "synonyms": [ # Defines a set of synonyms. Can specify up to 100 synonyms. Must specify at least 2 synonyms. Otherwise an INVALID ARGUMENT error is thrown.
+      "A String",
+    ],
+  },
+  "useCases": [ # Specifies the use case for the control. Affects what condition fields can be set. Only applies to SOLUTION_TYPE_SEARCH. Currently only allow one use case per control. Must be set when solution_type is SolutionType.SOLUTION_TYPE_SEARCH.
+    "A String",
+  ],
+}
+
+ +
+ delete(name, x__xgafv=None) +
Deletes a Control. If the Control to delete does not exist, a NOT_FOUND error is returned.
+
+Args:
+  name: string, Required. The resource name of the Control to delete. Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}/controls/{control_id}` (required)
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # A generic empty message that you can re-use to avoid defining duplicated empty messages in your APIs. A typical example is to use it as the request or the response type of an API method. For instance: service Foo { rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty); }
+}
+
+ +
+ get(name, x__xgafv=None) +
Gets a Control.
+
+Args:
+  name: string, Required. The resource name of the Control to get. Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}/controls/{control_id}` (required)
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Defines a conditioned behavior to employ during serving. Must be attached to a ServingConfig to be considered at serving time. Permitted actions dependent on `SolutionType`.
+  "associatedServingConfigIds": [ # Output only. List of all ServingConfig ids this control is attached to. May take up to 10 minutes to update after changes.
+    "A String",
+  ],
+  "boostAction": { # Adjusts order of products in returned list. # Defines a boost-type control
+    "boost": 3.14, # Required. Strength of the boost, which should be in [-1, 1]. Negative boost means demotion. Default is 0.0 (No-op).
+    "dataStore": "A String", # Required. Specifies which data store's documents can be boosted by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. Specifies which products to apply the boost to. If no filter is provided all products will be boosted (No-op). Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "conditions": [ # Determines when the associated action will trigger. Omit to always apply the action. Currently only a single condition may be specified. Otherwise an INVALID ARGUMENT error is thrown.
+    { # Defines circumstances to be checked before allowing a behavior
+      "activeTimeRange": [ # Range of time(s) specifying when condition is active. Maximum of 10 time ranges.
+        { # Used for time-dependent conditions.
+          "endTime": "A String", # End of time range. Range is inclusive. Must be in the future.
+          "startTime": "A String", # Start of time range. Range is inclusive.
+        },
+      ],
+      "queryTerms": [ # Search only A list of terms to match the query on. Maximum of 10 query terms.
+        { # Matcher for search request query
+          "fullMatch": True or False, # Whether the search query needs to exactly match the query term.
+          "value": "A String", # The specific query value to match against Must be lowercase, must be UTF-8. Can have at most 3 space separated terms if full_match is true. Cannot be an empty string. Maximum length of 5000 characters.
+        },
+      ],
+    },
+  ],
+  "displayName": "A String", # Required. Human readable name. The identifier used in UI views. Must be UTF-8 encoded string. Length limit is 128 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  "filterAction": { # Specified which products may be included in results. Uses same filter as boost. # Defines a filter-type control Currently not supported by Recommendation
+    "dataStore": "A String", # Required. Specifies which data store's documents can be filtered by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. A filter to apply on the matching condition results. Required Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "name": "A String", # Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`
+  "redirectAction": { # Redirects a shopper to the provided URI. # Defines a redirect-type control.
+    "redirectUri": "A String", # Required. The URI to which the shopper will be redirected. Required. URI must have length equal or less than 2000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "solutionType": "A String", # Required. Immutable. What solution the control belongs to. Must be compatible with vertical of resource. Otherwise an INVALID ARGUMENT error is thrown.
+  "synonymsAction": { # Creates a set of terms that will act as synonyms of one another. Example: "happy" will also be considered as "glad", "glad" will also be considered as "happy". # Treats a group of terms as synonyms of one another.
+    "synonyms": [ # Defines a set of synonyms. Can specify up to 100 synonyms. Must specify at least 2 synonyms. Otherwise an INVALID ARGUMENT error is thrown.
+      "A String",
+    ],
+  },
+  "useCases": [ # Specifies the use case for the control. Affects what condition fields can be set. Only applies to SOLUTION_TYPE_SEARCH. Currently only allow one use case per control. Must be set when solution_type is SolutionType.SOLUTION_TYPE_SEARCH.
+    "A String",
+  ],
+}
+
+ +
+ list(parent, filter=None, pageSize=None, pageToken=None, x__xgafv=None) +
Lists all Controls by their parent DataStore.
+
+Args:
+  parent: string, Required. The data store resource name. Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}` (required)
+  filter: string, Optional. A filter to apply on the list results. Supported features: * List all the products under the parent branch if filter is unset. Currently this field is unsupported.
+  pageSize: integer, Optional. Maximum number of results to return. If unspecified, defaults to 50. Max allowed value is 1000.
+  pageToken: string, Optional. A page token, received from a previous `ListControls` call. Provide this to retrieve the subsequent page.
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Response for ListControls method.
+  "controls": [ # All the Controls for a given data store.
+    { # Defines a conditioned behavior to employ during serving. Must be attached to a ServingConfig to be considered at serving time. Permitted actions dependent on `SolutionType`.
+      "associatedServingConfigIds": [ # Output only. List of all ServingConfig ids this control is attached to. May take up to 10 minutes to update after changes.
+        "A String",
+      ],
+      "boostAction": { # Adjusts order of products in returned list. # Defines a boost-type control
+        "boost": 3.14, # Required. Strength of the boost, which should be in [-1, 1]. Negative boost means demotion. Default is 0.0 (No-op).
+        "dataStore": "A String", # Required. Specifies which data store's documents can be boosted by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+        "filter": "A String", # Required. Specifies which products to apply the boost to. If no filter is provided all products will be boosted (No-op). Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+      },
+      "conditions": [ # Determines when the associated action will trigger. Omit to always apply the action. Currently only a single condition may be specified. Otherwise an INVALID ARGUMENT error is thrown.
+        { # Defines circumstances to be checked before allowing a behavior
+          "activeTimeRange": [ # Range of time(s) specifying when condition is active. Maximum of 10 time ranges.
+            { # Used for time-dependent conditions.
+              "endTime": "A String", # End of time range. Range is inclusive. Must be in the future.
+              "startTime": "A String", # Start of time range. Range is inclusive.
+            },
+          ],
+          "queryTerms": [ # Search only A list of terms to match the query on. Maximum of 10 query terms.
+            { # Matcher for search request query
+              "fullMatch": True or False, # Whether the search query needs to exactly match the query term.
+              "value": "A String", # The specific query value to match against Must be lowercase, must be UTF-8. Can have at most 3 space separated terms if full_match is true. Cannot be an empty string. Maximum length of 5000 characters.
+            },
+          ],
+        },
+      ],
+      "displayName": "A String", # Required. Human readable name. The identifier used in UI views. Must be UTF-8 encoded string. Length limit is 128 characters. Otherwise an INVALID ARGUMENT error is thrown.
+      "filterAction": { # Specified which products may be included in results. Uses same filter as boost. # Defines a filter-type control Currently not supported by Recommendation
+        "dataStore": "A String", # Required. Specifies which data store's documents can be filtered by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+        "filter": "A String", # Required. A filter to apply on the matching condition results. Required Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+      },
+      "name": "A String", # Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`
+      "redirectAction": { # Redirects a shopper to the provided URI. # Defines a redirect-type control.
+        "redirectUri": "A String", # Required. The URI to which the shopper will be redirected. Required. URI must have length equal or less than 2000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+      },
+      "solutionType": "A String", # Required. Immutable. What solution the control belongs to. Must be compatible with vertical of resource. Otherwise an INVALID ARGUMENT error is thrown.
+      "synonymsAction": { # Creates a set of terms that will act as synonyms of one another. Example: "happy" will also be considered as "glad", "glad" will also be considered as "happy". # Treats a group of terms as synonyms of one another.
+        "synonyms": [ # Defines a set of synonyms. Can specify up to 100 synonyms. Must specify at least 2 synonyms. Otherwise an INVALID ARGUMENT error is thrown.
+          "A String",
+        ],
+      },
+      "useCases": [ # Specifies the use case for the control. Affects what condition fields can be set. Only applies to SOLUTION_TYPE_SEARCH. Currently only allow one use case per control. Must be set when solution_type is SolutionType.SOLUTION_TYPE_SEARCH.
+        "A String",
+      ],
+    },
+  ],
+  "nextPageToken": "A String", # Pagination token, if not returned indicates the last page.
+}
+
+ +
+ list_next() +
Retrieves the next page of results.
+
+        Args:
+          previous_request: The request for the previous page. (required)
+          previous_response: The response from the request for the previous page. (required)
+
+        Returns:
+          A request object that you can call 'execute()' on to request the next
+          page. Returns None if there are no more items in the collection.
+        
+
+ +
+ patch(name, body=None, updateMask=None, x__xgafv=None) +
Updates a Control. Control action type cannot be changed. If the Control to update does not exist, a NOT_FOUND error is returned.
+
+Args:
+  name: string, Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*` (required)
+  body: object, The request body.
+    The object takes the form of:
+
+{ # Defines a conditioned behavior to employ during serving. Must be attached to a ServingConfig to be considered at serving time. Permitted actions dependent on `SolutionType`.
+  "associatedServingConfigIds": [ # Output only. List of all ServingConfig ids this control is attached to. May take up to 10 minutes to update after changes.
+    "A String",
+  ],
+  "boostAction": { # Adjusts order of products in returned list. # Defines a boost-type control
+    "boost": 3.14, # Required. Strength of the boost, which should be in [-1, 1]. Negative boost means demotion. Default is 0.0 (No-op).
+    "dataStore": "A String", # Required. Specifies which data store's documents can be boosted by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. Specifies which products to apply the boost to. If no filter is provided all products will be boosted (No-op). Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "conditions": [ # Determines when the associated action will trigger. Omit to always apply the action. Currently only a single condition may be specified. Otherwise an INVALID ARGUMENT error is thrown.
+    { # Defines circumstances to be checked before allowing a behavior
+      "activeTimeRange": [ # Range of time(s) specifying when condition is active. Maximum of 10 time ranges.
+        { # Used for time-dependent conditions.
+          "endTime": "A String", # End of time range. Range is inclusive. Must be in the future.
+          "startTime": "A String", # Start of time range. Range is inclusive.
+        },
+      ],
+      "queryTerms": [ # Search only A list of terms to match the query on. Maximum of 10 query terms.
+        { # Matcher for search request query
+          "fullMatch": True or False, # Whether the search query needs to exactly match the query term.
+          "value": "A String", # The specific query value to match against Must be lowercase, must be UTF-8. Can have at most 3 space separated terms if full_match is true. Cannot be an empty string. Maximum length of 5000 characters.
+        },
+      ],
+    },
+  ],
+  "displayName": "A String", # Required. Human readable name. The identifier used in UI views. Must be UTF-8 encoded string. Length limit is 128 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  "filterAction": { # Specified which products may be included in results. Uses same filter as boost. # Defines a filter-type control Currently not supported by Recommendation
+    "dataStore": "A String", # Required. Specifies which data store's documents can be filtered by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. A filter to apply on the matching condition results. Required Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "name": "A String", # Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`
+  "redirectAction": { # Redirects a shopper to the provided URI. # Defines a redirect-type control.
+    "redirectUri": "A String", # Required. The URI to which the shopper will be redirected. Required. URI must have length equal or less than 2000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "solutionType": "A String", # Required. Immutable. What solution the control belongs to. Must be compatible with vertical of resource. Otherwise an INVALID ARGUMENT error is thrown.
+  "synonymsAction": { # Creates a set of terms that will act as synonyms of one another. Example: "happy" will also be considered as "glad", "glad" will also be considered as "happy". # Treats a group of terms as synonyms of one another.
+    "synonyms": [ # Defines a set of synonyms. Can specify up to 100 synonyms. Must specify at least 2 synonyms. Otherwise an INVALID ARGUMENT error is thrown.
+      "A String",
+    ],
+  },
+  "useCases": [ # Specifies the use case for the control. Affects what condition fields can be set. Only applies to SOLUTION_TYPE_SEARCH. Currently only allow one use case per control. Must be set when solution_type is SolutionType.SOLUTION_TYPE_SEARCH.
+    "A String",
+  ],
+}
+
+  updateMask: string, Optional. Indicates which fields in the provided Control to update. The following are NOT supported: * Control.name * Control.solution_type If not set or empty, all supported fields are updated.
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Defines a conditioned behavior to employ during serving. Must be attached to a ServingConfig to be considered at serving time. Permitted actions dependent on `SolutionType`.
+  "associatedServingConfigIds": [ # Output only. List of all ServingConfig ids this control is attached to. May take up to 10 minutes to update after changes.
+    "A String",
+  ],
+  "boostAction": { # Adjusts order of products in returned list. # Defines a boost-type control
+    "boost": 3.14, # Required. Strength of the boost, which should be in [-1, 1]. Negative boost means demotion. Default is 0.0 (No-op).
+    "dataStore": "A String", # Required. Specifies which data store's documents can be boosted by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. Specifies which products to apply the boost to. If no filter is provided all products will be boosted (No-op). Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "conditions": [ # Determines when the associated action will trigger. Omit to always apply the action. Currently only a single condition may be specified. Otherwise an INVALID ARGUMENT error is thrown.
+    { # Defines circumstances to be checked before allowing a behavior
+      "activeTimeRange": [ # Range of time(s) specifying when condition is active. Maximum of 10 time ranges.
+        { # Used for time-dependent conditions.
+          "endTime": "A String", # End of time range. Range is inclusive. Must be in the future.
+          "startTime": "A String", # Start of time range. Range is inclusive.
+        },
+      ],
+      "queryTerms": [ # Search only A list of terms to match the query on. Maximum of 10 query terms.
+        { # Matcher for search request query
+          "fullMatch": True or False, # Whether the search query needs to exactly match the query term.
+          "value": "A String", # The specific query value to match against Must be lowercase, must be UTF-8. Can have at most 3 space separated terms if full_match is true. Cannot be an empty string. Maximum length of 5000 characters.
+        },
+      ],
+    },
+  ],
+  "displayName": "A String", # Required. Human readable name. The identifier used in UI views. Must be UTF-8 encoded string. Length limit is 128 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  "filterAction": { # Specified which products may be included in results. Uses same filter as boost. # Defines a filter-type control Currently not supported by Recommendation
+    "dataStore": "A String", # Required. Specifies which data store's documents can be filtered by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. A filter to apply on the matching condition results. Required Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "name": "A String", # Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`
+  "redirectAction": { # Redirects a shopper to the provided URI. # Defines a redirect-type control.
+    "redirectUri": "A String", # Required. The URI to which the shopper will be redirected. Required. URI must have length equal or less than 2000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "solutionType": "A String", # Required. Immutable. What solution the control belongs to. Must be compatible with vertical of resource. Otherwise an INVALID ARGUMENT error is thrown.
+  "synonymsAction": { # Creates a set of terms that will act as synonyms of one another. Example: "happy" will also be considered as "glad", "glad" will also be considered as "happy". # Treats a group of terms as synonyms of one another.
+    "synonyms": [ # Defines a set of synonyms. Can specify up to 100 synonyms. Must specify at least 2 synonyms. Otherwise an INVALID ARGUMENT error is thrown.
+      "A String",
+    ],
+  },
+  "useCases": [ # Specifies the use case for the control. Affects what condition fields can be set. Only applies to SOLUTION_TYPE_SEARCH. Currently only allow one use case per control. Must be set when solution_type is SolutionType.SOLUTION_TYPE_SEARCH.
+    "A String",
+  ],
+}
+
+ + \ No newline at end of file diff --git a/docs/dyn/discoveryengine_v1alpha.projects.locations.dataStores.conversations.html b/docs/dyn/discoveryengine_v1alpha.projects.locations.dataStores.conversations.html index 39e4052cdca..5f58cea8957 100644 --- a/docs/dyn/discoveryengine_v1alpha.projects.locations.dataStores.conversations.html +++ b/docs/dyn/discoveryengine_v1alpha.projects.locations.dataStores.conversations.html @@ -408,7 +408,7 @@

Method Details

"pageEnd": 42, # The end page of the chunk. "pageStart": 42, # The start page of the chunk. }, - "relevanceScore": 3.14, # Represents the relevance score based on similarity. Higher score represents the chunk relevance. The score is in range [-1.0, 1.0]. Only populated on SearchService.SearchResponse + "relevanceScore": 3.14, # Output only. Represents the relevance score based on similarity. Higher score indicates higher chunk relevance. The score is in range [-1.0, 1.0]. Only populated on SearchService.SearchResponse. }, "document": { # Document captures all raw metadata information of items to be recommended or searched. # The document data snippet in the search response. Only fields that are marked as `retrievable` are populated. "aclInfo": { # ACL Information of the Document. # Access control information for the document. diff --git a/docs/dyn/discoveryengine_v1alpha.projects.locations.dataStores.html b/docs/dyn/discoveryengine_v1alpha.projects.locations.dataStores.html index 8660b130ee8..e0efabd8391 100644 --- a/docs/dyn/discoveryengine_v1alpha.projects.locations.dataStores.html +++ b/docs/dyn/discoveryengine_v1alpha.projects.locations.dataStores.html @@ -79,6 +79,11 @@

Instance Methods

Returns the branches Resource.

+

+ controls() +

+

Returns the controls Resource.

+

conversations()

@@ -269,6 +274,9 @@

Method Details

"keyPropertyType": "A String", # Output only. Type of the key property that this field is mapped to. Empty string if this is not annotated as mapped to a key property. Example types are `title`, `description`. Full list is defined by `keyPropertyMapping` in the schema field annotation. If the schema field has a `KeyPropertyMapping` annotation, `indexable_option` and `searchable_option` of this field cannot be modified. "recsFilterableOption": "A String", # If recs_filterable_option is FILTERABLE_ENABLED, field values are filterable by filter expression in RecommendationService.Recommend. If FILTERABLE_ENABLED but the field type is numerical, field values are not filterable by text queries in RecommendationService.Recommend. Only textual fields are supported. If recs_filterable_option is unset, the default setting is FILTERABLE_DISABLED for fields that support setting filterable options. When a field set to [FILTERABLE_DISABLED] is filtered, a warning is generated and an empty result is returned. "retrievableOption": "A String", # If retrievable_option is RETRIEVABLE_ENABLED, field values are included in the search results. If retrievable_option is unset, the server behavior defaults to RETRIEVABLE_DISABLED for fields that support setting retrievable options. For those fields that do not support setting retrievable options, such as `object` and `boolean`, the server will skip retrievable option setting, and setting retrievable_option for those fields will throw `INVALID_ARGUMENT` error. + "schemaOrgPaths": [ # Field paths for indexing custom attribute from schema.org data. More details of schema.org and its defined types can be found at [schema.org](https://schema.org). It is only used on advanced site search schema. Currently only support full path from root. The full path to a field is constructed by concatenating field names, starting from `_root`, with a period `.` as the delimiter. Examples: * Publish date of the root: _root.datePublished * Publish date of the reviews: _root.review.datePublished + "A String", + ], "searchableOption": "A String", # If searchable_option is SEARCHABLE_ENABLED, field values are searchable by text queries in SearchService.Search. If SEARCHABLE_ENABLED but field type is numerical, field values will not be searchable by text queries in SearchService.Search, as there are no text values associated to numerical fields. If searchable_option is unset, the server behavior defaults to SEARCHABLE_DISABLED for fields that support setting searchable options. Only `string` fields that have no key property mapping support setting searchable_option. For those fields that do not support setting searchable options, the server will skip searchable option setting, and setting searchable_option for those fields will throw `INVALID_ARGUMENT` error. }, ], @@ -426,6 +434,9 @@

Method Details

"keyPropertyType": "A String", # Output only. Type of the key property that this field is mapped to. Empty string if this is not annotated as mapped to a key property. Example types are `title`, `description`. Full list is defined by `keyPropertyMapping` in the schema field annotation. If the schema field has a `KeyPropertyMapping` annotation, `indexable_option` and `searchable_option` of this field cannot be modified. "recsFilterableOption": "A String", # If recs_filterable_option is FILTERABLE_ENABLED, field values are filterable by filter expression in RecommendationService.Recommend. If FILTERABLE_ENABLED but the field type is numerical, field values are not filterable by text queries in RecommendationService.Recommend. Only textual fields are supported. If recs_filterable_option is unset, the default setting is FILTERABLE_DISABLED for fields that support setting filterable options. When a field set to [FILTERABLE_DISABLED] is filtered, a warning is generated and an empty result is returned. "retrievableOption": "A String", # If retrievable_option is RETRIEVABLE_ENABLED, field values are included in the search results. If retrievable_option is unset, the server behavior defaults to RETRIEVABLE_DISABLED for fields that support setting retrievable options. For those fields that do not support setting retrievable options, such as `object` and `boolean`, the server will skip retrievable option setting, and setting retrievable_option for those fields will throw `INVALID_ARGUMENT` error. + "schemaOrgPaths": [ # Field paths for indexing custom attribute from schema.org data. More details of schema.org and its defined types can be found at [schema.org](https://schema.org). It is only used on advanced site search schema. Currently only support full path from root. The full path to a field is constructed by concatenating field names, starting from `_root`, with a period `.` as the delimiter. Examples: * Publish date of the root: _root.datePublished * Publish date of the reviews: _root.review.datePublished + "A String", + ], "searchableOption": "A String", # If searchable_option is SEARCHABLE_ENABLED, field values are searchable by text queries in SearchService.Search. If SEARCHABLE_ENABLED but field type is numerical, field values will not be searchable by text queries in SearchService.Search, as there are no text values associated to numerical fields. If searchable_option is unset, the server behavior defaults to SEARCHABLE_DISABLED for fields that support setting searchable options. Only `string` fields that have no key property mapping support setting searchable_option. For those fields that do not support setting searchable options, the server will skip searchable option setting, and setting searchable_option for those fields will throw `INVALID_ARGUMENT` error. }, ], @@ -514,7 +525,7 @@

Method Details

Args: parent: string, Required. The parent branch resource name, such as `projects/{project}/locations/{location}/collections/{collection_id}`. If the caller does not have permission to list DataStores under this location, regardless of whether or not this data store exists, a PERMISSION_DENIED error is returned. (required) - filter: string, Filter by solution type . For example: filter = 'solution_type:SOLUTION_TYPE_SEARCH' + filter: string, Filter by solution type . For example: `filter = 'solution_type:SOLUTION_TYPE_SEARCH'` pageSize: integer, Maximum number of DataStores to return. If unspecified, defaults to 10. The maximum allowed value is 50. Values above 50 will be coerced to 50. If this field is negative, an INVALID_ARGUMENT is returned. pageToken: string, A page token ListDataStoresResponse.next_page_token, received from a previous DataStoreService.ListDataStores call. Provide this to retrieve the subsequent page. When paginating, all other parameters provided to DataStoreService.ListDataStores must match the call that provided the page token. Otherwise, an INVALID_ARGUMENT error is returned. x__xgafv: string, V1 error format. @@ -593,6 +604,9 @@

Method Details

"keyPropertyType": "A String", # Output only. Type of the key property that this field is mapped to. Empty string if this is not annotated as mapped to a key property. Example types are `title`, `description`. Full list is defined by `keyPropertyMapping` in the schema field annotation. If the schema field has a `KeyPropertyMapping` annotation, `indexable_option` and `searchable_option` of this field cannot be modified. "recsFilterableOption": "A String", # If recs_filterable_option is FILTERABLE_ENABLED, field values are filterable by filter expression in RecommendationService.Recommend. If FILTERABLE_ENABLED but the field type is numerical, field values are not filterable by text queries in RecommendationService.Recommend. Only textual fields are supported. If recs_filterable_option is unset, the default setting is FILTERABLE_DISABLED for fields that support setting filterable options. When a field set to [FILTERABLE_DISABLED] is filtered, a warning is generated and an empty result is returned. "retrievableOption": "A String", # If retrievable_option is RETRIEVABLE_ENABLED, field values are included in the search results. If retrievable_option is unset, the server behavior defaults to RETRIEVABLE_DISABLED for fields that support setting retrievable options. For those fields that do not support setting retrievable options, such as `object` and `boolean`, the server will skip retrievable option setting, and setting retrievable_option for those fields will throw `INVALID_ARGUMENT` error. + "schemaOrgPaths": [ # Field paths for indexing custom attribute from schema.org data. More details of schema.org and its defined types can be found at [schema.org](https://schema.org). It is only used on advanced site search schema. Currently only support full path from root. The full path to a field is constructed by concatenating field names, starting from `_root`, with a period `.` as the delimiter. Examples: * Publish date of the root: _root.datePublished * Publish date of the reviews: _root.review.datePublished + "A String", + ], "searchableOption": "A String", # If searchable_option is SEARCHABLE_ENABLED, field values are searchable by text queries in SearchService.Search. If SEARCHABLE_ENABLED but field type is numerical, field values will not be searchable by text queries in SearchService.Search, as there are no text values associated to numerical fields. If searchable_option is unset, the server behavior defaults to SEARCHABLE_DISABLED for fields that support setting searchable options. Only `string` fields that have no key property mapping support setting searchable_option. For those fields that do not support setting searchable options, the server will skip searchable option setting, and setting searchable_option for those fields will throw `INVALID_ARGUMENT` error. }, ], @@ -697,6 +711,9 @@

Method Details

"keyPropertyType": "A String", # Output only. Type of the key property that this field is mapped to. Empty string if this is not annotated as mapped to a key property. Example types are `title`, `description`. Full list is defined by `keyPropertyMapping` in the schema field annotation. If the schema field has a `KeyPropertyMapping` annotation, `indexable_option` and `searchable_option` of this field cannot be modified. "recsFilterableOption": "A String", # If recs_filterable_option is FILTERABLE_ENABLED, field values are filterable by filter expression in RecommendationService.Recommend. If FILTERABLE_ENABLED but the field type is numerical, field values are not filterable by text queries in RecommendationService.Recommend. Only textual fields are supported. If recs_filterable_option is unset, the default setting is FILTERABLE_DISABLED for fields that support setting filterable options. When a field set to [FILTERABLE_DISABLED] is filtered, a warning is generated and an empty result is returned. "retrievableOption": "A String", # If retrievable_option is RETRIEVABLE_ENABLED, field values are included in the search results. If retrievable_option is unset, the server behavior defaults to RETRIEVABLE_DISABLED for fields that support setting retrievable options. For those fields that do not support setting retrievable options, such as `object` and `boolean`, the server will skip retrievable option setting, and setting retrievable_option for those fields will throw `INVALID_ARGUMENT` error. + "schemaOrgPaths": [ # Field paths for indexing custom attribute from schema.org data. More details of schema.org and its defined types can be found at [schema.org](https://schema.org). It is only used on advanced site search schema. Currently only support full path from root. The full path to a field is constructed by concatenating field names, starting from `_root`, with a period `.` as the delimiter. Examples: * Publish date of the root: _root.datePublished * Publish date of the reviews: _root.review.datePublished + "A String", + ], "searchableOption": "A String", # If searchable_option is SEARCHABLE_ENABLED, field values are searchable by text queries in SearchService.Search. If SEARCHABLE_ENABLED but field type is numerical, field values will not be searchable by text queries in SearchService.Search, as there are no text values associated to numerical fields. If searchable_option is unset, the server behavior defaults to SEARCHABLE_DISABLED for fields that support setting searchable options. Only `string` fields that have no key property mapping support setting searchable_option. For those fields that do not support setting searchable options, the server will skip searchable option setting, and setting searchable_option for those fields will throw `INVALID_ARGUMENT` error. }, ], @@ -783,6 +800,9 @@

Method Details

"keyPropertyType": "A String", # Output only. Type of the key property that this field is mapped to. Empty string if this is not annotated as mapped to a key property. Example types are `title`, `description`. Full list is defined by `keyPropertyMapping` in the schema field annotation. If the schema field has a `KeyPropertyMapping` annotation, `indexable_option` and `searchable_option` of this field cannot be modified. "recsFilterableOption": "A String", # If recs_filterable_option is FILTERABLE_ENABLED, field values are filterable by filter expression in RecommendationService.Recommend. If FILTERABLE_ENABLED but the field type is numerical, field values are not filterable by text queries in RecommendationService.Recommend. Only textual fields are supported. If recs_filterable_option is unset, the default setting is FILTERABLE_DISABLED for fields that support setting filterable options. When a field set to [FILTERABLE_DISABLED] is filtered, a warning is generated and an empty result is returned. "retrievableOption": "A String", # If retrievable_option is RETRIEVABLE_ENABLED, field values are included in the search results. If retrievable_option is unset, the server behavior defaults to RETRIEVABLE_DISABLED for fields that support setting retrievable options. For those fields that do not support setting retrievable options, such as `object` and `boolean`, the server will skip retrievable option setting, and setting retrievable_option for those fields will throw `INVALID_ARGUMENT` error. + "schemaOrgPaths": [ # Field paths for indexing custom attribute from schema.org data. More details of schema.org and its defined types can be found at [schema.org](https://schema.org). It is only used on advanced site search schema. Currently only support full path from root. The full path to a field is constructed by concatenating field names, starting from `_root`, with a period `.` as the delimiter. Examples: * Publish date of the root: _root.datePublished * Publish date of the reviews: _root.review.datePublished + "A String", + ], "searchableOption": "A String", # If searchable_option is SEARCHABLE_ENABLED, field values are searchable by text queries in SearchService.Search. If SEARCHABLE_ENABLED but field type is numerical, field values will not be searchable by text queries in SearchService.Search, as there are no text values associated to numerical fields. If searchable_option is unset, the server behavior defaults to SEARCHABLE_DISABLED for fields that support setting searchable options. Only `string` fields that have no key property mapping support setting searchable_option. For those fields that do not support setting searchable options, the server will skip searchable option setting, and setting searchable_option for those fields will throw `INVALID_ARGUMENT` error. }, ], diff --git a/docs/dyn/discoveryengine_v1alpha.projects.locations.dataStores.schemas.html b/docs/dyn/discoveryengine_v1alpha.projects.locations.dataStores.schemas.html index 78dd8b092c2..4e6eb5c4e36 100644 --- a/docs/dyn/discoveryengine_v1alpha.projects.locations.dataStores.schemas.html +++ b/docs/dyn/discoveryengine_v1alpha.projects.locations.dataStores.schemas.html @@ -124,6 +124,9 @@

Method Details

"keyPropertyType": "A String", # Output only. Type of the key property that this field is mapped to. Empty string if this is not annotated as mapped to a key property. Example types are `title`, `description`. Full list is defined by `keyPropertyMapping` in the schema field annotation. If the schema field has a `KeyPropertyMapping` annotation, `indexable_option` and `searchable_option` of this field cannot be modified. "recsFilterableOption": "A String", # If recs_filterable_option is FILTERABLE_ENABLED, field values are filterable by filter expression in RecommendationService.Recommend. If FILTERABLE_ENABLED but the field type is numerical, field values are not filterable by text queries in RecommendationService.Recommend. Only textual fields are supported. If recs_filterable_option is unset, the default setting is FILTERABLE_DISABLED for fields that support setting filterable options. When a field set to [FILTERABLE_DISABLED] is filtered, a warning is generated and an empty result is returned. "retrievableOption": "A String", # If retrievable_option is RETRIEVABLE_ENABLED, field values are included in the search results. If retrievable_option is unset, the server behavior defaults to RETRIEVABLE_DISABLED for fields that support setting retrievable options. For those fields that do not support setting retrievable options, such as `object` and `boolean`, the server will skip retrievable option setting, and setting retrievable_option for those fields will throw `INVALID_ARGUMENT` error. + "schemaOrgPaths": [ # Field paths for indexing custom attribute from schema.org data. More details of schema.org and its defined types can be found at [schema.org](https://schema.org). It is only used on advanced site search schema. Currently only support full path from root. The full path to a field is constructed by concatenating field names, starting from `_root`, with a period `.` as the delimiter. Examples: * Publish date of the root: _root.datePublished * Publish date of the reviews: _root.review.datePublished + "A String", + ], "searchableOption": "A String", # If searchable_option is SEARCHABLE_ENABLED, field values are searchable by text queries in SearchService.Search. If SEARCHABLE_ENABLED but field type is numerical, field values will not be searchable by text queries in SearchService.Search, as there are no text values associated to numerical fields. If searchable_option is unset, the server behavior defaults to SEARCHABLE_DISABLED for fields that support setting searchable options. Only `string` fields that have no key property mapping support setting searchable_option. For those fields that do not support setting searchable options, the server will skip searchable option setting, and setting searchable_option for those fields will throw `INVALID_ARGUMENT` error. }, ], @@ -227,6 +230,9 @@

Method Details

"keyPropertyType": "A String", # Output only. Type of the key property that this field is mapped to. Empty string if this is not annotated as mapped to a key property. Example types are `title`, `description`. Full list is defined by `keyPropertyMapping` in the schema field annotation. If the schema field has a `KeyPropertyMapping` annotation, `indexable_option` and `searchable_option` of this field cannot be modified. "recsFilterableOption": "A String", # If recs_filterable_option is FILTERABLE_ENABLED, field values are filterable by filter expression in RecommendationService.Recommend. If FILTERABLE_ENABLED but the field type is numerical, field values are not filterable by text queries in RecommendationService.Recommend. Only textual fields are supported. If recs_filterable_option is unset, the default setting is FILTERABLE_DISABLED for fields that support setting filterable options. When a field set to [FILTERABLE_DISABLED] is filtered, a warning is generated and an empty result is returned. "retrievableOption": "A String", # If retrievable_option is RETRIEVABLE_ENABLED, field values are included in the search results. If retrievable_option is unset, the server behavior defaults to RETRIEVABLE_DISABLED for fields that support setting retrievable options. For those fields that do not support setting retrievable options, such as `object` and `boolean`, the server will skip retrievable option setting, and setting retrievable_option for those fields will throw `INVALID_ARGUMENT` error. + "schemaOrgPaths": [ # Field paths for indexing custom attribute from schema.org data. More details of schema.org and its defined types can be found at [schema.org](https://schema.org). It is only used on advanced site search schema. Currently only support full path from root. The full path to a field is constructed by concatenating field names, starting from `_root`, with a period `.` as the delimiter. Examples: * Publish date of the root: _root.datePublished * Publish date of the reviews: _root.review.datePublished + "A String", + ], "searchableOption": "A String", # If searchable_option is SEARCHABLE_ENABLED, field values are searchable by text queries in SearchService.Search. If SEARCHABLE_ENABLED but field type is numerical, field values will not be searchable by text queries in SearchService.Search, as there are no text values associated to numerical fields. If searchable_option is unset, the server behavior defaults to SEARCHABLE_DISABLED for fields that support setting searchable options. Only `string` fields that have no key property mapping support setting searchable_option. For those fields that do not support setting searchable options, the server will skip searchable option setting, and setting searchable_option for those fields will throw `INVALID_ARGUMENT` error. }, ], @@ -271,6 +277,9 @@

Method Details

"keyPropertyType": "A String", # Output only. Type of the key property that this field is mapped to. Empty string if this is not annotated as mapped to a key property. Example types are `title`, `description`. Full list is defined by `keyPropertyMapping` in the schema field annotation. If the schema field has a `KeyPropertyMapping` annotation, `indexable_option` and `searchable_option` of this field cannot be modified. "recsFilterableOption": "A String", # If recs_filterable_option is FILTERABLE_ENABLED, field values are filterable by filter expression in RecommendationService.Recommend. If FILTERABLE_ENABLED but the field type is numerical, field values are not filterable by text queries in RecommendationService.Recommend. Only textual fields are supported. If recs_filterable_option is unset, the default setting is FILTERABLE_DISABLED for fields that support setting filterable options. When a field set to [FILTERABLE_DISABLED] is filtered, a warning is generated and an empty result is returned. "retrievableOption": "A String", # If retrievable_option is RETRIEVABLE_ENABLED, field values are included in the search results. If retrievable_option is unset, the server behavior defaults to RETRIEVABLE_DISABLED for fields that support setting retrievable options. For those fields that do not support setting retrievable options, such as `object` and `boolean`, the server will skip retrievable option setting, and setting retrievable_option for those fields will throw `INVALID_ARGUMENT` error. + "schemaOrgPaths": [ # Field paths for indexing custom attribute from schema.org data. More details of schema.org and its defined types can be found at [schema.org](https://schema.org). It is only used on advanced site search schema. Currently only support full path from root. The full path to a field is constructed by concatenating field names, starting from `_root`, with a period `.` as the delimiter. Examples: * Publish date of the root: _root.datePublished * Publish date of the reviews: _root.review.datePublished + "A String", + ], "searchableOption": "A String", # If searchable_option is SEARCHABLE_ENABLED, field values are searchable by text queries in SearchService.Search. If SEARCHABLE_ENABLED but field type is numerical, field values will not be searchable by text queries in SearchService.Search, as there are no text values associated to numerical fields. If searchable_option is unset, the server behavior defaults to SEARCHABLE_DISABLED for fields that support setting searchable options. Only `string` fields that have no key property mapping support setting searchable_option. For those fields that do not support setting searchable options, the server will skip searchable option setting, and setting searchable_option for those fields will throw `INVALID_ARGUMENT` error. }, ], @@ -321,6 +330,9 @@

Method Details

"keyPropertyType": "A String", # Output only. Type of the key property that this field is mapped to. Empty string if this is not annotated as mapped to a key property. Example types are `title`, `description`. Full list is defined by `keyPropertyMapping` in the schema field annotation. If the schema field has a `KeyPropertyMapping` annotation, `indexable_option` and `searchable_option` of this field cannot be modified. "recsFilterableOption": "A String", # If recs_filterable_option is FILTERABLE_ENABLED, field values are filterable by filter expression in RecommendationService.Recommend. If FILTERABLE_ENABLED but the field type is numerical, field values are not filterable by text queries in RecommendationService.Recommend. Only textual fields are supported. If recs_filterable_option is unset, the default setting is FILTERABLE_DISABLED for fields that support setting filterable options. When a field set to [FILTERABLE_DISABLED] is filtered, a warning is generated and an empty result is returned. "retrievableOption": "A String", # If retrievable_option is RETRIEVABLE_ENABLED, field values are included in the search results. If retrievable_option is unset, the server behavior defaults to RETRIEVABLE_DISABLED for fields that support setting retrievable options. For those fields that do not support setting retrievable options, such as `object` and `boolean`, the server will skip retrievable option setting, and setting retrievable_option for those fields will throw `INVALID_ARGUMENT` error. + "schemaOrgPaths": [ # Field paths for indexing custom attribute from schema.org data. More details of schema.org and its defined types can be found at [schema.org](https://schema.org). It is only used on advanced site search schema. Currently only support full path from root. The full path to a field is constructed by concatenating field names, starting from `_root`, with a period `.` as the delimiter. Examples: * Publish date of the root: _root.datePublished * Publish date of the reviews: _root.review.datePublished + "A String", + ], "searchableOption": "A String", # If searchable_option is SEARCHABLE_ENABLED, field values are searchable by text queries in SearchService.Search. If SEARCHABLE_ENABLED but field type is numerical, field values will not be searchable by text queries in SearchService.Search, as there are no text values associated to numerical fields. If searchable_option is unset, the server behavior defaults to SEARCHABLE_DISABLED for fields that support setting searchable options. Only `string` fields that have no key property mapping support setting searchable_option. For those fields that do not support setting searchable options, the server will skip searchable option setting, and setting searchable_option for those fields will throw `INVALID_ARGUMENT` error. }, ], diff --git a/docs/dyn/discoveryengine_v1alpha.projects.locations.dataStores.servingConfigs.html b/docs/dyn/discoveryengine_v1alpha.projects.locations.dataStores.servingConfigs.html index ac5d2be448c..114b5beff32 100644 --- a/docs/dyn/discoveryengine_v1alpha.projects.locations.dataStores.servingConfigs.html +++ b/docs/dyn/discoveryengine_v1alpha.projects.locations.dataStores.servingConfigs.html @@ -115,6 +115,7 @@

Method Details

"answerGenerationSpec": { # Answer generation specification. # Answer generation specification. "answerLanguageCode": "A String", # Language code for Answer. Use language tags defined by [BCP47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt). Note: This is an experimental feature. "ignoreAdversarialQuery": True or False, # Specifies whether to filter out adversarial queries. The default value is `false`. Google employs search-query classification to detect adversarial queries. No answer is returned if the search query is classified as an adversarial query. For example, a user might ask a question regarding negative comments about the company or submit a query designed to generate unsafe, policy-violating output. If this field is set to `true`, we skip generating answers for adversarial queries and return fallback messages instead. + "ignoreLowRelevantContent": True or False, # Specifies whether to filter out queries that have low relevance. If this field is set to `false`, all search results are used regardless of relevance to generate answers. If set to `true` or unset, the behavior will be determined automatically by the service. "ignoreNonAnswerSeekingQuery": True or False, # Specifies whether to filter out queries that are not answer-seeking. The default value is `false`. Google employs search-query classification to detect answer-seeking queries. No answer is returned if the search query is classified as a non-answer seeking query. If this field is set to `true`, we skip generating answers for non-answer seeking queries and return fallback messages instead. "includeCitations": True or False, # Specifies whether to include citation metadata in the answer. The default value is `false`. "modelSpec": { # Answer Generation Model specification. # Answer generation model specification. @@ -170,6 +171,11 @@

Method Details

"customFineTuningSpec": { # Defines custom fine tuning spec. # Custom fine tuning configs. "enableSearchAdaptor": True or False, # Whether or not to enable and include custom fine tuned search adaptor model. }, + "dataStoreSpecs": [ # Specs defining dataStores to filter on in a search call and configurations for those dataStores. This is only considered for engines with multiple dataStores use case. For single dataStore within an engine, they should use the specs at the top level. + { # A struct to define data stores to filter on in a search call and configurations for those data stores. A maximum of 1 DataStoreSpec per data_store is allowed. Otherwise, an `INVALID_ARGUMENT` error is returned. + "dataStore": "A String", # Required. Full resource name of DataStore, such as `projects/{project}/locations/{location}/collections/{collection_id}/dataStores/{data_store_id}`. + }, + ], "filter": "A String", # The filter syntax consists of an expression language for constructing a predicate from one or more fields of the documents being filtered. Filter expression is case-sensitive. This will be used to filter search results which may affect the Answer response. If this field is unrecognizable, an `INVALID_ARGUMENT` is returned. Filtering in Vertex AI Search is done by mapping the LHS filter key to a key property defined in the Vertex AI Search backend -- this mapping is defined by the customer in their schema. For example a media customers might have a field 'name' in their schema. In this case the filter would look like this: filter --> name:'ANY("king kong")' For more information about filtering including syntax and filter operators, see [Filter](https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata) "maxReturnResults": 42, # Number of search results to return. The default value is 10. "orderBy": "A String", # The order in which documents are returned. Documents can be ordered by a field in an Document object. Leave it unset if ordered by relevance. `order_by` expression is case-sensitive. For more information on ordering, see [Ordering](https://cloud.google.com/retail/docs/filter-and-order#order) If this field is unrecognizable, an `INVALID_ARGUMENT` is returned. @@ -257,6 +263,9 @@

Method Details
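A minimal sketch, assuming placeholder resource names, of the search-params fragment documented above with the new `dataStoreSpecs` list (only meaningful for engines with multiple data stores):

    # Hypothetical search-params fragment for an answer query request.
    search_params = {
        "dataStoreSpecs": [  # at most one spec per data store
            {"dataStore": "projects/123/locations/global/collections/default_collection/dataStores/my_data_store"},
        ],
        "filter": "name: ANY(\"king kong\")",  # syntax per the Filter docs linked above
        "maxReturnResults": 10,
    }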

"documentMetadata": { # Document metadata. # Document metadata. "document": "A String", # Document resource name. "pageIdentifier": "A String", # Page identifier. + "structData": { # The structured JSON metadata for the document. It is populated from the struct data from the Chunk in search result. + "a_key": "", # Properties of the object. + }, "title": "A String", # Title. "uri": "A String", # URI for the document. }, @@ -270,6 +279,9 @@

Method Details

  },
  ],
  "document": "A String", # Document resource name.
+ "structData": { # The structured JSON metadata for the document. It is populated from the Chunk's struct data in the search result.
+   "a_key": "", # Properties of the object.
+ },
  "title": "A String", # Title.
  "uri": "A String", # URI for the document.
},
@@ -387,7 +399,7 @@

Method Details

"numPreviousSegments": 42, # Specifies whether to also include the adjacent from each selected segments. Return at most `num_previous_segments` segments before each selected segments. "returnExtractiveSegmentScore": True or False, # Specifies whether to return the confidence score from the extractive segments in each search result. This feature is available only for new or allowlisted data stores. To allowlist your data store, contact your Customer Engineer. The default value is `false`. }, - "searchResultMode": "A String", # Specifies the search result mode. If unspecified, the search result mode is based on DataStore.DocumentProcessingConfig.chunking_config: * If DataStore.DocumentProcessingConfig.chunking_config is specified, it defaults to `CHUNKS`. * Otherwise, it defaults to `DOCUMENTS`. + "searchResultMode": "A String", # Specifies the search result mode. If unspecified, the search result mode defaults to `DOCUMENTS`. "snippetSpec": { # A specification for configuring snippets in a search response. # If `snippetSpec` is not specified, snippets are not included in the search response. "maxSnippetCount": 42, # [DEPRECATED] This field is deprecated. To control snippet return, use `return_snippet` field. For backwards compatibility, we will return snippet if max_snippet_count > 0. "referenceOnly": True or False, # [DEPRECATED] This field is deprecated and will have no affect on the snippet. @@ -494,7 +506,7 @@

Method Details

"numPreviousSegments": 42, # Specifies whether to also include the adjacent from each selected segments. Return at most `num_previous_segments` segments before each selected segments. "returnExtractiveSegmentScore": True or False, # Specifies whether to return the confidence score from the extractive segments in each search result. This feature is available only for new or allowlisted data stores. To allowlist your data store, contact your Customer Engineer. The default value is `false`. }, - "searchResultMode": "A String", # Specifies the search result mode. If unspecified, the search result mode is based on DataStore.DocumentProcessingConfig.chunking_config: * If DataStore.DocumentProcessingConfig.chunking_config is specified, it defaults to `CHUNKS`. * Otherwise, it defaults to `DOCUMENTS`. + "searchResultMode": "A String", # Specifies the search result mode. If unspecified, the search result mode defaults to `DOCUMENTS`. "snippetSpec": { # A specification for configuring snippets in a search response. # If `snippetSpec` is not specified, snippets are not included in the search response. "maxSnippetCount": 42, # [DEPRECATED] This field is deprecated. To control snippet return, use `return_snippet` field. For backwards compatibility, we will return snippet if max_snippet_count > 0. "referenceOnly": True or False, # [DEPRECATED] This field is deprecated and will have no affect on the snippet. @@ -607,7 +619,7 @@

Method Details

"numPreviousSegments": 42, # Specifies whether to also include the adjacent from each selected segments. Return at most `num_previous_segments` segments before each selected segments. "returnExtractiveSegmentScore": True or False, # Specifies whether to return the confidence score from the extractive segments in each search result. This feature is available only for new or allowlisted data stores. To allowlist your data store, contact your Customer Engineer. The default value is `false`. }, - "searchResultMode": "A String", # Specifies the search result mode. If unspecified, the search result mode is based on DataStore.DocumentProcessingConfig.chunking_config: * If DataStore.DocumentProcessingConfig.chunking_config is specified, it defaults to `CHUNKS`. * Otherwise, it defaults to `DOCUMENTS`. + "searchResultMode": "A String", # Specifies the search result mode. If unspecified, the search result mode defaults to `DOCUMENTS`. "snippetSpec": { # A specification for configuring snippets in a search response. # If `snippetSpec` is not specified, snippets are not included in the search response. "maxSnippetCount": 42, # [DEPRECATED] This field is deprecated. To control snippet return, use `return_snippet` field. For backwards compatibility, we will return snippet if max_snippet_count > 0. "referenceOnly": True or False, # [DEPRECATED] This field is deprecated and will have no affect on the snippet. @@ -703,7 +715,7 @@

Method Details

"numPreviousSegments": 42, # Specifies whether to also include the adjacent from each selected segments. Return at most `num_previous_segments` segments before each selected segments. "returnExtractiveSegmentScore": True or False, # Specifies whether to return the confidence score from the extractive segments in each search result. This feature is available only for new or allowlisted data stores. To allowlist your data store, contact your Customer Engineer. The default value is `false`. }, - "searchResultMode": "A String", # Specifies the search result mode. If unspecified, the search result mode is based on DataStore.DocumentProcessingConfig.chunking_config: * If DataStore.DocumentProcessingConfig.chunking_config is specified, it defaults to `CHUNKS`. * Otherwise, it defaults to `DOCUMENTS`. + "searchResultMode": "A String", # Specifies the search result mode. If unspecified, the search result mode defaults to `DOCUMENTS`. "snippetSpec": { # A specification for configuring snippets in a search response. # If `snippetSpec` is not specified, snippets are not included in the search response. "maxSnippetCount": 42, # [DEPRECATED] This field is deprecated. To control snippet return, use `return_snippet` field. For backwards compatibility, we will return snippet if max_snippet_count > 0. "referenceOnly": True or False, # [DEPRECATED] This field is deprecated and will have no affect on the snippet. @@ -790,6 +802,7 @@

Method Details
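Since the default no longer depends on the data store's chunking config, a request that still wants chunk-level results must ask for them explicitly; a minimal sketch with placeholder values:

    # Hypothetical contentSearchSpec fragment; CHUNKS must now be requested
    # explicitly, since an unset searchResultMode defaults to DOCUMENTS.
    content_search_spec = {
        "searchResultMode": "CHUNKS",
        "snippetSpec": {"returnSnippet": True},
    }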

"selectedPosition": 42, # End user selected CompleteQueryResponse.QuerySuggestion.suggestion position, starting from 0. "selectedSuggestion": "A String", # End user selected CompleteQueryResponse.QuerySuggestion.suggestion. }, + "dataStore": "A String", # The DataStore resource full name, of the form `projects/{project}/locations/{location}/collections/{collection_id}/dataStores/{data_store_id}`. Optional. Only required for user events whose data store can't by determined by UserEvent.engine or UserEvent.documents. If data store is set in the parent of write/import/collect user event requests, this field can be omitted. "directUserRequest": True or False, # Should set to true if the request is made directly from the end user, in which case the UserEvent.user_info.user_agent can be populated from the HTTP request. This flag should be set only if the API request is made directly from the end user such as a mobile app (and not if a gateway or a server is processing and pushing the user events). This should not be set when using the JavaScript tag in UserEventService.CollectUserEvent. "documents": [ # List of Documents associated with this user event. This field is optional except for the following event types: * `view-item` * `add-to-cart` * `purchase` * `media-play` * `media-complete` In a `search` event, this field represents the documents returned to the end user on the current page (the end user may have not finished browsing the whole page yet). When a new page is returned to the end user, after pagination/filtering/ordering even for the same query, a new `search` event with different UserEvent.documents is desired. { # Detailed document information associated with a user event. @@ -802,6 +815,7 @@

Method Details

"uri": "A String", # The Document URI - only allowed for website data stores. }, ], + "engine": "A String", # The Engine resource name, in the form of `projects/{project}/locations/{location}/collections/{collection_id}/engines/{engine_id}`. Optional. Only required for Engine produced user events. For example, user events from blended search. "eventTime": "A String", # Only required for UserEventService.ImportUserEvents method. Timestamp of when the user event happened. "eventType": "A String", # Required. User event type. Allowed values are: Generic values: * `search`: Search for Documents. * `view-item`: Detailed page view of a Document. * `view-item-list`: View of a panel or ordered list of Documents. * `view-home-page`: View of the home page. * `view-category-page`: View of a category page, e.g. Home > Men > Jeans Retail-related values: * `add-to-cart`: Add an item(s) to cart, e.g. in Retail online shopping * `purchase`: Purchase an item(s) Media-related values: * `media-play`: Start/resume watching a video, playing a song, etc. * `media-complete`: Finished or stopped midway through a video, song, etc. "filter": "A String", # The filter syntax consists of an expression language for constructing a predicate from one or more fields of the documents being filtered. One example is for `search` events, the associated SearchRequest may contain a filter expression in SearchRequest.filter conforming to https://google.aip.dev/160#filtering. Similarly, for `view-item-list` events that are generated from a RecommendRequest, this field may be populated directly from RecommendRequest.filter conforming to https://google.aip.dev/160#filtering. The value must be a UTF-8 encoded string with a length limit of 1,000 characters. Otherwise, an `INVALID_ARGUMENT` error is returned. @@ -952,7 +966,7 @@

Method Details

"numPreviousSegments": 42, # Specifies whether to also include the adjacent from each selected segments. Return at most `num_previous_segments` segments before each selected segments. "returnExtractiveSegmentScore": True or False, # Specifies whether to return the confidence score from the extractive segments in each search result. This feature is available only for new or allowlisted data stores. To allowlist your data store, contact your Customer Engineer. The default value is `false`. }, - "searchResultMode": "A String", # Specifies the search result mode. If unspecified, the search result mode is based on DataStore.DocumentProcessingConfig.chunking_config: * If DataStore.DocumentProcessingConfig.chunking_config is specified, it defaults to `CHUNKS`. * Otherwise, it defaults to `DOCUMENTS`. + "searchResultMode": "A String", # Specifies the search result mode. If unspecified, the search result mode defaults to `DOCUMENTS`. "snippetSpec": { # A specification for configuring snippets in a search response. # If `snippetSpec` is not specified, snippets are not included in the search response. "maxSnippetCount": 42, # [DEPRECATED] This field is deprecated. To control snippet return, use `return_snippet` field. For backwards compatibility, we will return snippet if max_snippet_count > 0. "referenceOnly": True or False, # [DEPRECATED] This field is deprecated and will have no affect on the snippet. @@ -1136,7 +1150,7 @@

Method Details

"pageEnd": 42, # The end page of the chunk. "pageStart": 42, # The start page of the chunk. }, - "relevanceScore": 3.14, # Represents the relevance score based on similarity. Higher score represents the chunk relevance. The score is in range [-1.0, 1.0]. Only populated on SearchService.SearchResponse + "relevanceScore": 3.14, # Output only. Represents the relevance score based on similarity. Higher score indicates higher chunk relevance. The score is in range [-1.0, 1.0]. Only populated on SearchService.SearchResponse. }, "document": { # Document captures all raw metadata information of items to be recommended or searched. # The document data snippet in the search response. Only fields that are marked as `retrievable` are populated. "aclInfo": { # ACL Information of the Document. # Access control information for the document. diff --git a/docs/dyn/discoveryengine_v1alpha.projects.locations.dataStores.sessions.answers.html b/docs/dyn/discoveryengine_v1alpha.projects.locations.dataStores.sessions.answers.html index f2d7060d2cf..3b0d819c96a 100644 --- a/docs/dyn/discoveryengine_v1alpha.projects.locations.dataStores.sessions.answers.html +++ b/docs/dyn/discoveryengine_v1alpha.projects.locations.dataStores.sessions.answers.html @@ -135,6 +135,9 @@

Method Details

"documentMetadata": { # Document metadata. # Document metadata. "document": "A String", # Document resource name. "pageIdentifier": "A String", # Page identifier. + "structData": { # The structured JSON metadata for the document. It is populated from the struct data from the Chunk in search result. + "a_key": "", # Properties of the object. + }, "title": "A String", # Title. "uri": "A String", # URI for the document. }, @@ -148,6 +151,9 @@

Method Details
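A hedged sketch of reading the new `structData` metadata off an answer's citation sources; the response shape is assumed to match the documentMetadata structure shown above, and `answer` is a hypothetical parsed response dict:

    # Print the title and structured metadata of each cited document.
    for reference in answer.get("references", []):
        metadata = reference.get("chunkInfo", {}).get("documentMetadata", {})
        print(metadata.get("title"), metadata.get("structData", {}))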

  },
  ],
  "document": "A String", # Document resource name.
+ "structData": { # The structured JSON metadata for the document. It is populated from the Chunk's struct data in the search result.
+   "a_key": "", # Properties of the object.
+ },
  "title": "A String", # Title.
  "uri": "A String", # URI for the document.
},
diff --git a/docs/dyn/discoveryengine_v1alpha.projects.locations.dataStores.userEvents.html b/docs/dyn/discoveryengine_v1alpha.projects.locations.dataStores.userEvents.html
index 9c393461dcb..915cc8d973a 100644
--- a/docs/dyn/discoveryengine_v1alpha.projects.locations.dataStores.userEvents.html
+++ b/docs/dyn/discoveryengine_v1alpha.projects.locations.dataStores.userEvents.html
@@ -87,7 +87,7 @@

Instance Methods

purge(parent, body=None, x__xgafv=None)

Permanently deletes all user events specified by the provided filter. Depending on the number of events specified by the filter, this operation could take hours or days to complete. To test a filter, use the list command first.

- write(parent, body=None, x__xgafv=None)

+ write(parent, body=None, writeAsync=None, x__xgafv=None)

Writes a single user event.

Method Details

@@ -172,6 +172,7 @@

Method Details

"selectedPosition": 42, # End user selected CompleteQueryResponse.QuerySuggestion.suggestion position, starting from 0. "selectedSuggestion": "A String", # End user selected CompleteQueryResponse.QuerySuggestion.suggestion. }, + "dataStore": "A String", # The DataStore resource full name, of the form `projects/{project}/locations/{location}/collections/{collection_id}/dataStores/{data_store_id}`. Optional. Only required for user events whose data store can't by determined by UserEvent.engine or UserEvent.documents. If data store is set in the parent of write/import/collect user event requests, this field can be omitted. "directUserRequest": True or False, # Should set to true if the request is made directly from the end user, in which case the UserEvent.user_info.user_agent can be populated from the HTTP request. This flag should be set only if the API request is made directly from the end user such as a mobile app (and not if a gateway or a server is processing and pushing the user events). This should not be set when using the JavaScript tag in UserEventService.CollectUserEvent. "documents": [ # List of Documents associated with this user event. This field is optional except for the following event types: * `view-item` * `add-to-cart` * `purchase` * `media-play` * `media-complete` In a `search` event, this field represents the documents returned to the end user on the current page (the end user may have not finished browsing the whole page yet). When a new page is returned to the end user, after pagination/filtering/ordering even for the same query, a new `search` event with different UserEvent.documents is desired. { # Detailed document information associated with a user event. @@ -184,6 +185,7 @@

Method Details

"uri": "A String", # The Document URI - only allowed for website data stores. }, ], + "engine": "A String", # The Engine resource name, in the form of `projects/{project}/locations/{location}/collections/{collection_id}/engines/{engine_id}`. Optional. Only required for Engine produced user events. For example, user events from blended search. "eventTime": "A String", # Only required for UserEventService.ImportUserEvents method. Timestamp of when the user event happened. "eventType": "A String", # Required. User event type. Allowed values are: Generic values: * `search`: Search for Documents. * `view-item`: Detailed page view of a Document. * `view-item-list`: View of a panel or ordered list of Documents. * `view-home-page`: View of the home page. * `view-category-page`: View of a category page, e.g. Home > Men > Jeans Retail-related values: * `add-to-cart`: Add an item(s) to cart, e.g. in Retail online shopping * `purchase`: Purchase an item(s) Media-related values: * `media-play`: Start/resume watching a video, playing a song, etc. * `media-complete`: Finished or stopped midway through a video, song, etc. "filter": "A String", # The filter syntax consists of an expression language for constructing a predicate from one or more fields of the documents being filtered. One example is for `search` events, the associated SearchRequest may contain a filter expression in SearchRequest.filter conforming to https://google.aip.dev/160#filtering. Similarly, for `view-item-list` events that are generated from a RecommendRequest, this field may be populated directly from RecommendRequest.filter conforming to https://google.aip.dev/160#filtering. The value must be a UTF-8 encoded string with a length limit of 1,000 characters. Otherwise, an `INVALID_ARGUMENT` error is returned. @@ -306,7 +308,7 @@

Method Details

- write(parent, body=None, x__xgafv=None)
+ write(parent, body=None, writeAsync=None, x__xgafv=None)
Writes a single user event.
 
 Args:
@@ -330,6 +332,7 @@ 

Method Details

"selectedPosition": 42, # End user selected CompleteQueryResponse.QuerySuggestion.suggestion position, starting from 0. "selectedSuggestion": "A String", # End user selected CompleteQueryResponse.QuerySuggestion.suggestion. }, + "dataStore": "A String", # The DataStore resource full name, of the form `projects/{project}/locations/{location}/collections/{collection_id}/dataStores/{data_store_id}`. Optional. Only required for user events whose data store can't by determined by UserEvent.engine or UserEvent.documents. If data store is set in the parent of write/import/collect user event requests, this field can be omitted. "directUserRequest": True or False, # Should set to true if the request is made directly from the end user, in which case the UserEvent.user_info.user_agent can be populated from the HTTP request. This flag should be set only if the API request is made directly from the end user such as a mobile app (and not if a gateway or a server is processing and pushing the user events). This should not be set when using the JavaScript tag in UserEventService.CollectUserEvent. "documents": [ # List of Documents associated with this user event. This field is optional except for the following event types: * `view-item` * `add-to-cart` * `purchase` * `media-play` * `media-complete` In a `search` event, this field represents the documents returned to the end user on the current page (the end user may have not finished browsing the whole page yet). When a new page is returned to the end user, after pagination/filtering/ordering even for the same query, a new `search` event with different UserEvent.documents is desired. { # Detailed document information associated with a user event. @@ -342,6 +345,7 @@

Method Details

"uri": "A String", # The Document URI - only allowed for website data stores. }, ], + "engine": "A String", # The Engine resource name, in the form of `projects/{project}/locations/{location}/collections/{collection_id}/engines/{engine_id}`. Optional. Only required for Engine produced user events. For example, user events from blended search. "eventTime": "A String", # Only required for UserEventService.ImportUserEvents method. Timestamp of when the user event happened. "eventType": "A String", # Required. User event type. Allowed values are: Generic values: * `search`: Search for Documents. * `view-item`: Detailed page view of a Document. * `view-item-list`: View of a panel or ordered list of Documents. * `view-home-page`: View of the home page. * `view-category-page`: View of a category page, e.g. Home > Men > Jeans Retail-related values: * `add-to-cart`: Add an item(s) to cart, e.g. in Retail online shopping * `purchase`: Purchase an item(s) Media-related values: * `media-play`: Start/resume watching a video, playing a song, etc. * `media-complete`: Finished or stopped midway through a video, song, etc. "filter": "A String", # The filter syntax consists of an expression language for constructing a predicate from one or more fields of the documents being filtered. One example is for `search` events, the associated SearchRequest may contain a filter expression in SearchRequest.filter conforming to https://google.aip.dev/160#filtering. Similarly, for `view-item-list` events that are generated from a RecommendRequest, this field may be populated directly from RecommendRequest.filter conforming to https://google.aip.dev/160#filtering. The value must be a UTF-8 encoded string with a length limit of 1,000 characters. Otherwise, an `INVALID_ARGUMENT` error is returned. @@ -388,6 +392,7 @@

Method Details

"userPseudoId": "A String", # Required. A unique identifier for tracking visitors. For example, this could be implemented with an HTTP cookie, which should be able to uniquely identify a visitor on a single device. This unique identifier should not change if the visitor log in/out of the website. Do not set the field to the same fixed ID for different users. This mixes the event history of those users together, which results in degraded model quality. The field must be a UTF-8 encoded string with a length limit of 128 characters. Otherwise, an `INVALID_ARGUMENT` error is returned. The field should not contain PII or user-data. We recommend to use Google Analytics [Client ID](https://developers.google.com/analytics/devguides/collection/analyticsjs/field-reference#clientId) for this field. } + writeAsync: boolean, If set to true, the user event is written asynchronously after validation, and the API responds without waiting for the write. x__xgafv: string, V1 error format. Allowed values 1 - v1 error format @@ -412,6 +417,7 @@

Method Details
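A minimal usage sketch, with placeholder project and data store IDs, of the new `writeAsync` flag together with the new `dataStore` field on the event body:

    from googleapiclient.discovery import build

    client = build("discoveryengine", "v1alpha")
    event = {
        "eventType": "view-home-page",  # needs no `documents` list
        "userPseudoId": "visitor-cookie-42",
        # New in this patch: set when the parent alone can't determine the store.
        "dataStore": "projects/123/locations/global/collections/default_collection/dataStores/my_data_store",
    }
    # writeAsync=True returns after validation without waiting for the write.
    client.projects().locations().userEvents().write(
        parent="projects/123/locations/global", body=event, writeAsync=True
    ).execute()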

"selectedPosition": 42, # End user selected CompleteQueryResponse.QuerySuggestion.suggestion position, starting from 0. "selectedSuggestion": "A String", # End user selected CompleteQueryResponse.QuerySuggestion.suggestion. }, + "dataStore": "A String", # The DataStore resource full name, of the form `projects/{project}/locations/{location}/collections/{collection_id}/dataStores/{data_store_id}`. Optional. Only required for user events whose data store can't by determined by UserEvent.engine or UserEvent.documents. If data store is set in the parent of write/import/collect user event requests, this field can be omitted. "directUserRequest": True or False, # Should set to true if the request is made directly from the end user, in which case the UserEvent.user_info.user_agent can be populated from the HTTP request. This flag should be set only if the API request is made directly from the end user such as a mobile app (and not if a gateway or a server is processing and pushing the user events). This should not be set when using the JavaScript tag in UserEventService.CollectUserEvent. "documents": [ # List of Documents associated with this user event. This field is optional except for the following event types: * `view-item` * `add-to-cart` * `purchase` * `media-play` * `media-complete` In a `search` event, this field represents the documents returned to the end user on the current page (the end user may have not finished browsing the whole page yet). When a new page is returned to the end user, after pagination/filtering/ordering even for the same query, a new `search` event with different UserEvent.documents is desired. { # Detailed document information associated with a user event. @@ -424,6 +430,7 @@

Method Details

"uri": "A String", # The Document URI - only allowed for website data stores. }, ], + "engine": "A String", # The Engine resource name, in the form of `projects/{project}/locations/{location}/collections/{collection_id}/engines/{engine_id}`. Optional. Only required for Engine produced user events. For example, user events from blended search. "eventTime": "A String", # Only required for UserEventService.ImportUserEvents method. Timestamp of when the user event happened. "eventType": "A String", # Required. User event type. Allowed values are: Generic values: * `search`: Search for Documents. * `view-item`: Detailed page view of a Document. * `view-item-list`: View of a panel or ordered list of Documents. * `view-home-page`: View of the home page. * `view-category-page`: View of a category page, e.g. Home > Men > Jeans Retail-related values: * `add-to-cart`: Add an item(s) to cart, e.g. in Retail online shopping * `purchase`: Purchase an item(s) Media-related values: * `media-play`: Start/resume watching a video, playing a song, etc. * `media-complete`: Finished or stopped midway through a video, song, etc. "filter": "A String", # The filter syntax consists of an expression language for constructing a predicate from one or more fields of the documents being filtered. One example is for `search` events, the associated SearchRequest may contain a filter expression in SearchRequest.filter conforming to https://google.aip.dev/160#filtering. Similarly, for `view-item-list` events that are generated from a RecommendRequest, this field may be populated directly from RecommendRequest.filter conforming to https://google.aip.dev/160#filtering. The value must be a UTF-8 encoded string with a length limit of 1,000 characters. Otherwise, an `INVALID_ARGUMENT` error is returned. diff --git a/docs/dyn/discoveryengine_v1alpha.projects.locations.rankingConfigs.html b/docs/dyn/discoveryengine_v1alpha.projects.locations.rankingConfigs.html index f6edded6eb6..9e3abab27f0 100644 --- a/docs/dyn/discoveryengine_v1alpha.projects.locations.rankingConfigs.html +++ b/docs/dyn/discoveryengine_v1alpha.projects.locations.rankingConfigs.html @@ -108,6 +108,9 @@

Method Details

  },
],
"topN": 42, # The number of results to return. If this is unset or not greater than zero, all results are returned.
+ "userLabels": { # The user labels applied to a resource must meet the following requirements: * Each resource can have multiple labels, up to a maximum of 64. * Each label must be a key-value pair. * Keys have a minimum length of 1 character and a maximum length of 63 characters and cannot be empty. Values can be empty and have a maximum length of 63 characters. * Keys and values can contain only lowercase letters, numeric characters, underscores, and dashes. All characters must use UTF-8 encoding, and international characters are allowed. * The key portion of a label must be unique. However, you can use the same key with multiple resources. * Keys must start with a lowercase letter or international character. See [Creating and managing labels](https://cloud.google.com/resource-manager/docs/creating-managing-labels#requirements) for more details.
+   "a_key": "A String",
+ },
}

x__xgafv: string, V1 error format.
diff --git a/docs/dyn/discoveryengine_v1alpha.projects.locations.userEvents.html b/docs/dyn/discoveryengine_v1alpha.projects.locations.userEvents.html
index 690aece7d39..894ce6ca5e7 100644
--- a/docs/dyn/discoveryengine_v1alpha.projects.locations.userEvents.html
+++ b/docs/dyn/discoveryengine_v1alpha.projects.locations.userEvents.html
@@ -78,7 +78,7 @@

Instance Methods

close()

Close httplib2 connections.

- write(parent, body=None, x__xgafv=None)

+ write(parent, body=None, writeAsync=None, x__xgafv=None)

Writes a single user event.

Method Details

@@ -87,7 +87,7 @@

Method Details

- write(parent, body=None, x__xgafv=None)
+ write(parent, body=None, writeAsync=None, x__xgafv=None)
Writes a single user event.
 
 Args:
@@ -111,6 +111,7 @@ 

Method Details

"selectedPosition": 42, # End user selected CompleteQueryResponse.QuerySuggestion.suggestion position, starting from 0. "selectedSuggestion": "A String", # End user selected CompleteQueryResponse.QuerySuggestion.suggestion. }, + "dataStore": "A String", # The DataStore resource full name, of the form `projects/{project}/locations/{location}/collections/{collection_id}/dataStores/{data_store_id}`. Optional. Only required for user events whose data store can't by determined by UserEvent.engine or UserEvent.documents. If data store is set in the parent of write/import/collect user event requests, this field can be omitted. "directUserRequest": True or False, # Should set to true if the request is made directly from the end user, in which case the UserEvent.user_info.user_agent can be populated from the HTTP request. This flag should be set only if the API request is made directly from the end user such as a mobile app (and not if a gateway or a server is processing and pushing the user events). This should not be set when using the JavaScript tag in UserEventService.CollectUserEvent. "documents": [ # List of Documents associated with this user event. This field is optional except for the following event types: * `view-item` * `add-to-cart` * `purchase` * `media-play` * `media-complete` In a `search` event, this field represents the documents returned to the end user on the current page (the end user may have not finished browsing the whole page yet). When a new page is returned to the end user, after pagination/filtering/ordering even for the same query, a new `search` event with different UserEvent.documents is desired. { # Detailed document information associated with a user event. @@ -123,6 +124,7 @@

Method Details

"uri": "A String", # The Document URI - only allowed for website data stores. }, ], + "engine": "A String", # The Engine resource name, in the form of `projects/{project}/locations/{location}/collections/{collection_id}/engines/{engine_id}`. Optional. Only required for Engine produced user events. For example, user events from blended search. "eventTime": "A String", # Only required for UserEventService.ImportUserEvents method. Timestamp of when the user event happened. "eventType": "A String", # Required. User event type. Allowed values are: Generic values: * `search`: Search for Documents. * `view-item`: Detailed page view of a Document. * `view-item-list`: View of a panel or ordered list of Documents. * `view-home-page`: View of the home page. * `view-category-page`: View of a category page, e.g. Home > Men > Jeans Retail-related values: * `add-to-cart`: Add an item(s) to cart, e.g. in Retail online shopping * `purchase`: Purchase an item(s) Media-related values: * `media-play`: Start/resume watching a video, playing a song, etc. * `media-complete`: Finished or stopped midway through a video, song, etc. "filter": "A String", # The filter syntax consists of an expression language for constructing a predicate from one or more fields of the documents being filtered. One example is for `search` events, the associated SearchRequest may contain a filter expression in SearchRequest.filter conforming to https://google.aip.dev/160#filtering. Similarly, for `view-item-list` events that are generated from a RecommendRequest, this field may be populated directly from RecommendRequest.filter conforming to https://google.aip.dev/160#filtering. The value must be a UTF-8 encoded string with a length limit of 1,000 characters. Otherwise, an `INVALID_ARGUMENT` error is returned. @@ -169,6 +171,7 @@

Method Details

"userPseudoId": "A String", # Required. A unique identifier for tracking visitors. For example, this could be implemented with an HTTP cookie, which should be able to uniquely identify a visitor on a single device. This unique identifier should not change if the visitor log in/out of the website. Do not set the field to the same fixed ID for different users. This mixes the event history of those users together, which results in degraded model quality. The field must be a UTF-8 encoded string with a length limit of 128 characters. Otherwise, an `INVALID_ARGUMENT` error is returned. The field should not contain PII or user-data. We recommend to use Google Analytics [Client ID](https://developers.google.com/analytics/devguides/collection/analyticsjs/field-reference#clientId) for this field. } + writeAsync: boolean, If set to true, the user event is written asynchronously after validation, and the API responds without waiting for the write. x__xgafv: string, V1 error format. Allowed values 1 - v1 error format @@ -193,6 +196,7 @@

Method Details

"selectedPosition": 42, # End user selected CompleteQueryResponse.QuerySuggestion.suggestion position, starting from 0. "selectedSuggestion": "A String", # End user selected CompleteQueryResponse.QuerySuggestion.suggestion. }, + "dataStore": "A String", # The DataStore resource full name, of the form `projects/{project}/locations/{location}/collections/{collection_id}/dataStores/{data_store_id}`. Optional. Only required for user events whose data store can't by determined by UserEvent.engine or UserEvent.documents. If data store is set in the parent of write/import/collect user event requests, this field can be omitted. "directUserRequest": True or False, # Should set to true if the request is made directly from the end user, in which case the UserEvent.user_info.user_agent can be populated from the HTTP request. This flag should be set only if the API request is made directly from the end user such as a mobile app (and not if a gateway or a server is processing and pushing the user events). This should not be set when using the JavaScript tag in UserEventService.CollectUserEvent. "documents": [ # List of Documents associated with this user event. This field is optional except for the following event types: * `view-item` * `add-to-cart` * `purchase` * `media-play` * `media-complete` In a `search` event, this field represents the documents returned to the end user on the current page (the end user may have not finished browsing the whole page yet). When a new page is returned to the end user, after pagination/filtering/ordering even for the same query, a new `search` event with different UserEvent.documents is desired. { # Detailed document information associated with a user event. @@ -205,6 +209,7 @@

Method Details

"uri": "A String", # The Document URI - only allowed for website data stores. }, ], + "engine": "A String", # The Engine resource name, in the form of `projects/{project}/locations/{location}/collections/{collection_id}/engines/{engine_id}`. Optional. Only required for Engine produced user events. For example, user events from blended search. "eventTime": "A String", # Only required for UserEventService.ImportUserEvents method. Timestamp of when the user event happened. "eventType": "A String", # Required. User event type. Allowed values are: Generic values: * `search`: Search for Documents. * `view-item`: Detailed page view of a Document. * `view-item-list`: View of a panel or ordered list of Documents. * `view-home-page`: View of the home page. * `view-category-page`: View of a category page, e.g. Home > Men > Jeans Retail-related values: * `add-to-cart`: Add an item(s) to cart, e.g. in Retail online shopping * `purchase`: Purchase an item(s) Media-related values: * `media-play`: Start/resume watching a video, playing a song, etc. * `media-complete`: Finished or stopped midway through a video, song, etc. "filter": "A String", # The filter syntax consists of an expression language for constructing a predicate from one or more fields of the documents being filtered. One example is for `search` events, the associated SearchRequest may contain a filter expression in SearchRequest.filter conforming to https://google.aip.dev/160#filtering. Similarly, for `view-item-list` events that are generated from a RecommendRequest, this field may be populated directly from RecommendRequest.filter conforming to https://google.aip.dev/160#filtering. The value must be a UTF-8 encoded string with a length limit of 1,000 characters. Otherwise, an `INVALID_ARGUMENT` error is returned. diff --git a/docs/dyn/discoveryengine_v1beta.projects.html b/docs/dyn/discoveryengine_v1beta.projects.html index 930ed20beda..4e54db345ab 100644 --- a/docs/dyn/discoveryengine_v1beta.projects.html +++ b/docs/dyn/discoveryengine_v1beta.projects.html @@ -87,10 +87,56 @@

Instance Methods

close()

Close httplib2 connections.

+

+ provision(name, body=None, x__xgafv=None)

+

Provisions the project resource. During the process, related systems are prepared and initialized. The caller must read the [Terms for data use](https://cloud.google.com/retail/data-use-terms) and can optionally provide consent to those terms of service in the request.

Method Details

close()
Close httplib2 connections.
+
+ provision(name, body=None, x__xgafv=None) +
Provisions the project resource. During the process, related systems are prepared and initialized. The caller must read the [Terms for data use](https://cloud.google.com/retail/data-use-terms) and can optionally provide consent to those terms of service in the request.
+
+Args:
+  name: string, Required. Full resource name of a Project, such as `projects/{project_id_or_number}`. (required)
+  body: object, The request body.
+    The object takes the form of:
+
+{ # Request for ProjectService.ProvisionProject method.
+  "acceptDataUseTerms": True or False, # Required. Set to `true` to specify that caller has read and would like to give consent to the [Terms for data use](https://cloud.google.com/retail/data-use-terms).
+  "dataUseTermsVersion": "A String", # Required. The version of the [Terms for data use](https://cloud.google.com/retail/data-use-terms) that caller has read and would like to give consent to. Acceptable version is `2022-11-23`, and this may change over time.
+}
+
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # This resource represents a long-running operation that is the result of a network API call.
+  "done": True or False, # If the value is `false`, it means the operation is still in progress. If `true`, the operation is completed, and either `error` or `response` is available.
+  "error": { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # The error result of the operation in case of failure or cancellation.
+    "code": 42, # The status code, which should be an enum value of google.rpc.Code.
+    "details": [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
+      {
+        "a_key": "", # Properties of the object. Contains field @type with type URL.
+      },
+    ],
+    "message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
+  },
+  "metadata": { # Service-specific metadata associated with the operation. It typically contains progress information and common metadata such as create time. Some services might not provide such metadata. Any method that returns a long-running operation should document the metadata type, if any.
+    "a_key": "", # Properties of the object. Contains field @type with type URL.
+  },
+  "name": "A String", # The server-assigned name, which is only unique within the same service that originally returns it. If you use the default HTTP mapping, the `name` should be a resource name ending with `operations/{unique_id}`.
+  "response": { # The normal, successful response of the operation. If the original method returns no data on success, such as `Delete`, the response is `google.protobuf.Empty`. If the original method is standard `Get`/`Create`/`Update`, the response should be the resource. For other methods, the response should have the type `XxxResponse`, where `Xxx` is the original method name. For example, if the original method name is `TakeSnapshot()`, the inferred response type is `TakeSnapshotResponse`.
+    "a_key": "", # Properties of the object. Contains field @type with type URL.
+  },
+}
+
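A usage sketch for the method above, assuming a placeholder project name and default credentials:

    from googleapiclient.discovery import build

    client = build("discoveryengine", "v1beta")
    operation = client.projects().provision(
        name="projects/my-project",
        body={
            "acceptDataUseTerms": True,
            "dataUseTermsVersion": "2022-11-23",
        },
    ).execute()
    # This returns a long-running operation; poll until `done` is true,
    # then inspect `response` or `error` as documented above.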
+
\ No newline at end of file
diff --git a/docs/dyn/discoveryengine_v1beta.projects.locations.collections.dataStores.controls.html b/docs/dyn/discoveryengine_v1beta.projects.locations.collections.dataStores.controls.html
new file mode 100644
index 00000000000..10bd5326b59
--- /dev/null
+++ b/docs/dyn/discoveryengine_v1beta.projects.locations.collections.dataStores.controls.html
@@ -0,0 +1,482 @@
+
+
+

Discovery Engine API . projects . locations . collections . dataStores . controls

+

Instance Methods

+

+ close()

+

Close httplib2 connections.

+

+ create(parent, body=None, controlId=None, x__xgafv=None)

+

Creates a Control. By default 1000 controls are allowed for a data store. A request can be submitted to adjust this limit. If the Control to create already exists, an ALREADY_EXISTS error is returned.

+

+ delete(name, x__xgafv=None)

+

Deletes a Control. If the Control to delete does not exist, a NOT_FOUND error is returned.

+

+ get(name, x__xgafv=None)

+

Gets a Control.

+

+ list(parent, filter=None, pageSize=None, pageToken=None, x__xgafv=None)

+

Lists all Controls by their parent DataStore.

+

+ list_next()

+

Retrieves the next page of results.

+

+ patch(name, body=None, updateMask=None, x__xgafv=None)

+

Updates a Control. Control action type cannot be changed. If the Control to update does not exist, a NOT_FOUND error is returned.

+

Method Details
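Before the per-method details, a hedged sketch of creating a boost-type Control with the methods listed above; the resource names, filter expression, and control ID are placeholders:

    from googleapiclient.discovery import build

    client = build("discoveryengine", "v1beta")
    parent = ("projects/123/locations/global/collections/"
              "default_collection/dataStores/my_data_store")
    control = {
        "displayName": "Demote clearance items",
        "solutionType": "SOLUTION_TYPE_SEARCH",
        "boostAction": {
            "boost": -0.5,  # negative values demote
            "filter": "categories: ANY(\"clearance\")",
            "dataStore": parent,
        },
    }
    client.projects().locations().collections().dataStores().controls().create(
        parent=parent, body=control, controlId="demote-clearance"
    ).execute()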

+
+ close() +
Close httplib2 connections.
+
+ +
+ create(parent, body=None, controlId=None, x__xgafv=None) +
Creates a Control. By default 1000 controls are allowed for a data store. A request can be submitted to adjust this limit. If the Control to create already exists, an ALREADY_EXISTS error is returned.
+
+Args:
+  parent: string, Required. Full resource name of parent data store. Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}` (required)
+  body: object, The request body.
+    The object takes the form of:
+
+{ # Defines a conditioned behavior to employ during serving. Must be attached to a ServingConfig to be considered at serving time. Permitted actions depend on `SolutionType`.
+  "associatedServingConfigIds": [ # Output only. List of all ServingConfig ids this control is attached to. May take up to 10 minutes to update after changes.
+    "A String",
+  ],
+  "boostAction": { # Adjusts order of products in returned list. # Defines a boost-type control
+    "boost": 3.14, # Required. Strength of the boost, which should be in [-1, 1]. Negative boost means demotion. Default is 0.0 (No-op).
+    "dataStore": "A String", # Required. Specifies which data store's documents can be boosted by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. Specifies which products to apply the boost to. If no filter is provided all products will be boosted (No-op). Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "conditions": [ # Determines when the associated action will trigger. Omit to always apply the action. Currently only a single condition may be specified. Otherwise an INVALID ARGUMENT error is thrown.
+    { # Defines circumstances to be checked before allowing a behavior
+      "activeTimeRange": [ # Range of time(s) specifying when condition is active. Maximum of 10 time ranges.
+        { # Used for time-dependent conditions.
+          "endTime": "A String", # End of time range. Range is inclusive. Must be in the future.
+          "startTime": "A String", # Start of time range. Range is inclusive.
+        },
+      ],
+      "queryTerms": [ # Search only A list of terms to match the query on. Maximum of 10 query terms.
+        { # Matcher for search request query
+          "fullMatch": True or False, # Whether the search query needs to exactly match the query term.
+          "value": "A String", # The specific query value to match against Must be lowercase, must be UTF-8. Can have at most 3 space separated terms if full_match is true. Cannot be an empty string. Maximum length of 5000 characters.
+        },
+      ],
+    },
+  ],
+  "displayName": "A String", # Required. Human readable name. The identifier used in UI views. Must be UTF-8 encoded string. Length limit is 128 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  "filterAction": { # Specified which products may be included in results. Uses same filter as boost. # Defines a filter-type control Currently not supported by Recommendation
+    "dataStore": "A String", # Required. Specifies which data store's documents can be filtered by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. A filter to apply on the matching condition results. Required Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "name": "A String", # Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`
+  "redirectAction": { # Redirects a shopper to the provided URI. # Defines a redirect-type control.
+    "redirectUri": "A String", # Required. The URI to which the shopper will be redirected. Required. URI must have length equal or less than 2000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "solutionType": "A String", # Required. Immutable. What solution the control belongs to. Must be compatible with vertical of resource. Otherwise an INVALID ARGUMENT error is thrown.
+  "synonymsAction": { # Creates a set of terms that will act as synonyms of one another. Example: "happy" will also be considered as "glad", "glad" will also be considered as "happy". # Treats a group of terms as synonyms of one another.
+    "synonyms": [ # Defines a set of synonyms. Can specify up to 100 synonyms. Must specify at least 2 synonyms. Otherwise an INVALID ARGUMENT error is thrown.
+      "A String",
+    ],
+  },
+  "useCases": [ # Specifies the use case for the control. Affects what condition fields can be set. Only applies to SOLUTION_TYPE_SEARCH. Currently only allow one use case per control. Must be set when solution_type is SolutionType.SOLUTION_TYPE_SEARCH.
+    "A String",
+  ],
+}
+
+  controlId: string, Required. The ID to use for the Control, which will become the final component of the Control's resource name. This value must be within 1-63 characters. Valid characters are /a-z-_/.
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Defines a conditioned behavior to employ during serving. Must be attached to a ServingConfig to be considered at serving time. Permitted actions are dependent on `SolutionType`.
+  "associatedServingConfigIds": [ # Output only. List of all ServingConfig ids this control is attached to. May take up to 10 minutes to update after changes.
+    "A String",
+  ],
+  "boostAction": { # Adjusts order of products in returned list. # Defines a boost-type control
+    "boost": 3.14, # Required. Strength of the boost, which should be in [-1, 1]. Negative boost means demotion. Default is 0.0 (No-op).
+    "dataStore": "A String", # Required. Specifies which data store's documents can be boosted by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. Specifies which products to apply the boost to. If no filter is provided all products will be boosted (No-op). Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "conditions": [ # Determines when the associated action will trigger. Omit to always apply the action. Currently only a single condition may be specified. Otherwise an INVALID ARGUMENT error is thrown.
+    { # Defines circumstances to be checked before allowing a behavior
+      "activeTimeRange": [ # Range of time(s) specifying when condition is active. Maximum of 10 time ranges.
+        { # Used for time-dependent conditions.
+          "endTime": "A String", # End of time range. Range is inclusive. Must be in the future.
+          "startTime": "A String", # Start of time range. Range is inclusive.
+        },
+      ],
+      "queryTerms": [ # Search only A list of terms to match the query on. Maximum of 10 query terms.
+        { # Matcher for search request query
+          "fullMatch": True or False, # Whether the search query needs to exactly match the query term.
+          "value": "A String", # The specific query value to match against Must be lowercase, must be UTF-8. Can have at most 3 space separated terms if full_match is true. Cannot be an empty string. Maximum length of 5000 characters.
+        },
+      ],
+    },
+  ],
+  "displayName": "A String", # Required. Human readable name. The identifier used in UI views. Must be UTF-8 encoded string. Length limit is 128 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  "filterAction": { # Specified which products may be included in results. Uses same filter as boost. # Defines a filter-type control Currently not supported by Recommendation
+    "dataStore": "A String", # Required. Specifies which data store's documents can be filtered by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. A filter to apply on the matching condition results. Required Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "name": "A String", # Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`
+  "redirectAction": { # Redirects a shopper to the provided URI. # Defines a redirect-type control.
+    "redirectUri": "A String", # Required. The URI to which the shopper will be redirected. Required. URI must have length equal or less than 2000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "solutionType": "A String", # Required. Immutable. What solution the control belongs to. Must be compatible with vertical of resource. Otherwise an INVALID ARGUMENT error is thrown.
+  "synonymsAction": { # Creates a set of terms that will act as synonyms of one another. Example: "happy" will also be considered as "glad", "glad" will also be considered as "happy". # Treats a group of terms as synonyms of one another.
+    "synonyms": [ # Defines a set of synonyms. Can specify up to 100 synonyms. Must specify at least 2 synonyms. Otherwise an INVALID ARGUMENT error is thrown.
+      "A String",
+    ],
+  },
+  "useCases": [ # Specifies the use case for the control. Affects what condition fields can be set. Only applies to SOLUTION_TYPE_SEARCH. Currently only allow one use case per control. Must be set when solution_type is SolutionType.SOLUTION_TYPE_SEARCH.
+    "A String",
+  ],
+}
+
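+
+  A minimal sketch of calling create() with the google-api-python-client dynamic client follows. The project number, data store ID, control ID, enum value, and boost filter are hypothetical placeholders, and Application Default Credentials are assumed.
+
+    from googleapiclient.discovery import build
+
+    # Build the Discovery Engine v1beta client (uses Application Default Credentials).
+    client = build("discoveryengine", "v1beta")
+
+    # Hypothetical parent data store, matching the format documented above.
+    parent = ("projects/123/locations/global/collections/default_collection"
+              "/dataStores/default_data_store")
+
+    control = {
+        "displayName": "Promote available items",
+        "solutionType": "SOLUTION_TYPE_SEARCH",
+        "useCases": ["SEARCH_USE_CASE_SEARCH"],  # assumed enum value
+        "boostAction": {
+            "boost": 0.5,  # mild promotion; must stay within [-1, 1]
+            "dataStore": parent,
+            "filter": 'available: ANY("true")',  # illustrative filter only
+        },
+    }
+
+    created = (client.projects().locations().collections().dataStores()
+               .controls()
+               .create(parent=parent, body=control, controlId="promote-available")
+               .execute())
+    print(created["name"])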
+ +
+ delete(name, x__xgafv=None) +
Deletes a Control. If the Control to delete does not exist, a NOT_FOUND error is returned.
+
+Args:
+  name: string, Required. The resource name of the Control to delete. Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}/controls/{control_id}` (required)
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # A generic empty message that you can re-use to avoid defining duplicated empty messages in your APIs. A typical example is to use it as the request or the response type of an API method. For instance: service Foo { rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty); }
+}
+
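+
+  A matching sketch for delete(), reusing the hypothetical `client` and `parent` from the create() example above; the response is the empty message documented here.
+
+    name = parent + "/controls/promote-available"  # hypothetical control
+    (client.projects().locations().collections().dataStores()
+     .controls().delete(name=name).execute())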
+ +
+ get(name, x__xgafv=None) +
Gets a Control.
+
+Args:
+  name: string, Required. The resource name of the Control to get. Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}/controls/{control_id}` (required)
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Defines a conditioned behavior to employ during serving. Must be attached to a ServingConfig to be considered at serving time. Permitted actions are dependent on `SolutionType`.
+  "associatedServingConfigIds": [ # Output only. List of all ServingConfig ids this control is attached to. May take up to 10 minutes to update after changes.
+    "A String",
+  ],
+  "boostAction": { # Adjusts order of products in returned list. # Defines a boost-type control
+    "boost": 3.14, # Required. Strength of the boost, which should be in [-1, 1]. Negative boost means demotion. Default is 0.0 (No-op).
+    "dataStore": "A String", # Required. Specifies which data store's documents can be boosted by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. Specifies which products to apply the boost to. If no filter is provided all products will be boosted (No-op). Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "conditions": [ # Determines when the associated action will trigger. Omit to always apply the action. Currently only a single condition may be specified. Otherwise an INVALID ARGUMENT error is thrown.
+    { # Defines circumstances to be checked before allowing a behavior
+      "activeTimeRange": [ # Range of time(s) specifying when condition is active. Maximum of 10 time ranges.
+        { # Used for time-dependent conditions.
+          "endTime": "A String", # End of time range. Range is inclusive. Must be in the future.
+          "startTime": "A String", # Start of time range. Range is inclusive.
+        },
+      ],
+      "queryTerms": [ # Search only A list of terms to match the query on. Maximum of 10 query terms.
+        { # Matcher for search request query
+          "fullMatch": True or False, # Whether the search query needs to exactly match the query term.
+          "value": "A String", # The specific query value to match against Must be lowercase, must be UTF-8. Can have at most 3 space separated terms if full_match is true. Cannot be an empty string. Maximum length of 5000 characters.
+        },
+      ],
+    },
+  ],
+  "displayName": "A String", # Required. Human readable name. The identifier used in UI views. Must be UTF-8 encoded string. Length limit is 128 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  "filterAction": { # Specified which products may be included in results. Uses same filter as boost. # Defines a filter-type control Currently not supported by Recommendation
+    "dataStore": "A String", # Required. Specifies which data store's documents can be filtered by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. A filter to apply on the matching condition results. Required Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "name": "A String", # Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`
+  "redirectAction": { # Redirects a shopper to the provided URI. # Defines a redirect-type control.
+    "redirectUri": "A String", # Required. The URI to which the shopper will be redirected. Required. URI must have length equal or less than 2000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "solutionType": "A String", # Required. Immutable. What solution the control belongs to. Must be compatible with vertical of resource. Otherwise an INVALID ARGUMENT error is thrown.
+  "synonymsAction": { # Creates a set of terms that will act as synonyms of one another. Example: "happy" will also be considered as "glad", "glad" will also be considered as "happy". # Treats a group of terms as synonyms of one another.
+    "synonyms": [ # Defines a set of synonyms. Can specify up to 100 synonyms. Must specify at least 2 synonyms. Otherwise an INVALID ARGUMENT error is thrown.
+      "A String",
+    ],
+  },
+  "useCases": [ # Specifies the use case for the control. Affects what condition fields can be set. Only applies to SOLUTION_TYPE_SEARCH. Currently only allow one use case per control. Must be set when solution_type is SolutionType.SOLUTION_TYPE_SEARCH.
+    "A String",
+  ],
+}
+
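+
+  A short sketch for get(), under the same assumptions as the examples above:
+
+    ctrl = (client.projects().locations().collections().dataStores()
+            .controls().get(name=name).execute())
+    print(ctrl.get("displayName"), ctrl.get("solutionType"))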
+ +
+ list(parent, filter=None, pageSize=None, pageToken=None, x__xgafv=None) +
Lists all Controls by their parent DataStore.
+
+Args:
+  parent: string, Required. The data store resource name. Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}` (required)
+  filter: string, Optional. A filter to apply on the list results. Supported features: * List all the products under the parent branch if filter is unset. Currently this field is unsupported.
+  pageSize: integer, Optional. Maximum number of results to return. If unspecified, defaults to 50. Max allowed value is 1000.
+  pageToken: string, Optional. A page token, received from a previous `ListControls` call. Provide this to retrieve the subsequent page.
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Response for ListControls method.
+  "controls": [ # All the Controls for a given data store.
+    { # Defines a conditioned behavior to employ during serving. Must be attached to a ServingConfig to be considered at serving time. Permitted actions are dependent on `SolutionType`.
+      "associatedServingConfigIds": [ # Output only. List of all ServingConfig ids this control is attached to. May take up to 10 minutes to update after changes.
+        "A String",
+      ],
+      "boostAction": { # Adjusts order of products in returned list. # Defines a boost-type control
+        "boost": 3.14, # Required. Strength of the boost, which should be in [-1, 1]. Negative boost means demotion. Default is 0.0 (No-op).
+        "dataStore": "A String", # Required. Specifies which data store's documents can be boosted by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+        "filter": "A String", # Required. Specifies which products to apply the boost to. If no filter is provided all products will be boosted (No-op). Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+      },
+      "conditions": [ # Determines when the associated action will trigger. Omit to always apply the action. Currently only a single condition may be specified. Otherwise an INVALID ARGUMENT error is thrown.
+        { # Defines circumstances to be checked before allowing a behavior
+          "activeTimeRange": [ # Range of time(s) specifying when condition is active. Maximum of 10 time ranges.
+            { # Used for time-dependent conditions.
+              "endTime": "A String", # End of time range. Range is inclusive. Must be in the future.
+              "startTime": "A String", # Start of time range. Range is inclusive.
+            },
+          ],
+          "queryTerms": [ # Search only A list of terms to match the query on. Maximum of 10 query terms.
+            { # Matcher for search request query
+              "fullMatch": True or False, # Whether the search query needs to exactly match the query term.
+              "value": "A String", # The specific query value to match against Must be lowercase, must be UTF-8. Can have at most 3 space separated terms if full_match is true. Cannot be an empty string. Maximum length of 5000 characters.
+            },
+          ],
+        },
+      ],
+      "displayName": "A String", # Required. Human readable name. The identifier used in UI views. Must be UTF-8 encoded string. Length limit is 128 characters. Otherwise an INVALID ARGUMENT error is thrown.
+      "filterAction": { # Specified which products may be included in results. Uses same filter as boost. # Defines a filter-type control Currently not supported by Recommendation
+        "dataStore": "A String", # Required. Specifies which data store's documents can be filtered by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+        "filter": "A String", # Required. A filter to apply on the matching condition results. Required Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+      },
+      "name": "A String", # Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`
+      "redirectAction": { # Redirects a shopper to the provided URI. # Defines a redirect-type control.
+        "redirectUri": "A String", # Required. The URI to which the shopper will be redirected. Required. URI must have length equal or less than 2000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+      },
+      "solutionType": "A String", # Required. Immutable. What solution the control belongs to. Must be compatible with vertical of resource. Otherwise an INVALID ARGUMENT error is thrown.
+      "synonymsAction": { # Creates a set of terms that will act as synonyms of one another. Example: "happy" will also be considered as "glad", "glad" will also be considered as "happy". # Treats a group of terms as synonyms of one another.
+        "synonyms": [ # Defines a set of synonyms. Can specify up to 100 synonyms. Must specify at least 2 synonyms. Otherwise an INVALID ARGUMENT error is thrown.
+          "A String",
+        ],
+      },
+      "useCases": [ # Specifies the use case for the control. Affects what condition fields can be set. Only applies to SOLUTION_TYPE_SEARCH. Currently only allow one use case per control. Must be set when solution_type is SolutionType.SOLUTION_TYPE_SEARCH.
+        "A String",
+      ],
+    },
+  ],
+  "nextPageToken": "A String", # Pagination token, if not returned indicates the last page.
+}
+
+ +
+ list_next() +
Retrieves the next page of results.
+
+        Args:
+          previous_request: The request for the previous page. (required)
+          previous_response: The response from the request for the previous page. (required)
+
+        Returns:
+          A request object that you can call 'execute()' on to request the next
+          page. Returns None if there are no more items in the collection.
+        
+
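+
+  list() and list_next() implement standard page-token iteration; a sketch, again reusing the assumed `client` and `parent` from the create() example:
+
+    request = (client.projects().locations().collections().dataStores()
+               .controls().list(parent=parent, pageSize=50))
+    while request is not None:
+        response = request.execute()
+        for ctrl in response.get("controls", []):
+            print(ctrl["name"])
+        # list_next() returns None once nextPageToken is absent (the last page).
+        request = (client.projects().locations().collections().dataStores()
+                   .controls().list_next(request, response))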
+ +
+ patch(name, body=None, updateMask=None, x__xgafv=None) +
Updates a Control. Control action type cannot be changed. If the Control to update does not exist, a NOT_FOUND error is returned.
+
+Args:
+  name: string, Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*` (required)
+  body: object, The request body.
+    The object takes the form of:
+
+{ # Defines a conditioned behavior to employ during serving. Must be attached to a ServingConfig to be considered at serving time. Permitted actions are dependent on `SolutionType`.
+  "associatedServingConfigIds": [ # Output only. List of all ServingConfig ids this control is attached to. May take up to 10 minutes to update after changes.
+    "A String",
+  ],
+  "boostAction": { # Adjusts order of products in returned list. # Defines a boost-type control
+    "boost": 3.14, # Required. Strength of the boost, which should be in [-1, 1]. Negative boost means demotion. Default is 0.0 (No-op).
+    "dataStore": "A String", # Required. Specifies which data store's documents can be boosted by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. Specifies which products to apply the boost to. If no filter is provided all products will be boosted (No-op). Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "conditions": [ # Determines when the associated action will trigger. Omit to always apply the action. Currently only a single condition may be specified. Otherwise an INVALID ARGUMENT error is thrown.
+    { # Defines circumstances to be checked before allowing a behavior
+      "activeTimeRange": [ # Range of time(s) specifying when condition is active. Maximum of 10 time ranges.
+        { # Used for time-dependent conditions.
+          "endTime": "A String", # End of time range. Range is inclusive. Must be in the future.
+          "startTime": "A String", # Start of time range. Range is inclusive.
+        },
+      ],
+      "queryTerms": [ # Search only A list of terms to match the query on. Maximum of 10 query terms.
+        { # Matcher for search request query
+          "fullMatch": True or False, # Whether the search query needs to exactly match the query term.
+          "value": "A String", # The specific query value to match against Must be lowercase, must be UTF-8. Can have at most 3 space separated terms if full_match is true. Cannot be an empty string. Maximum length of 5000 characters.
+        },
+      ],
+    },
+  ],
+  "displayName": "A String", # Required. Human readable name. The identifier used in UI views. Must be UTF-8 encoded string. Length limit is 128 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  "filterAction": { # Specified which products may be included in results. Uses same filter as boost. # Defines a filter-type control Currently not supported by Recommendation
+    "dataStore": "A String", # Required. Specifies which data store's documents can be filtered by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. A filter to apply on the matching condition results. Required Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "name": "A String", # Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`
+  "redirectAction": { # Redirects a shopper to the provided URI. # Defines a redirect-type control.
+    "redirectUri": "A String", # Required. The URI to which the shopper will be redirected. Required. URI must have length equal or less than 2000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "solutionType": "A String", # Required. Immutable. What solution the control belongs to. Must be compatible with vertical of resource. Otherwise an INVALID ARGUMENT error is thrown.
+  "synonymsAction": { # Creates a set of terms that will act as synonyms of one another. Example: "happy" will also be considered as "glad", "glad" will also be considered as "happy". # Treats a group of terms as synonyms of one another.
+    "synonyms": [ # Defines a set of synonyms. Can specify up to 100 synonyms. Must specify at least 2 synonyms. Otherwise an INVALID ARGUMENT error is thrown.
+      "A String",
+    ],
+  },
+  "useCases": [ # Specifies the use case for the control. Affects what condition fields can be set. Only applies to SOLUTION_TYPE_SEARCH. Currently only allow one use case per control. Must be set when solution_type is SolutionType.SOLUTION_TYPE_SEARCH.
+    "A String",
+  ],
+}
+
+  updateMask: string, Optional. Indicates which fields in the provided Control to update. The following are NOT supported: * Control.name * Control.solution_type If not set or empty, all supported fields are updated.
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Defines a conditioned behavior to employ during serving. Must be attached to a ServingConfig to be considered at serving time. Permitted actions are dependent on `SolutionType`.
+  "associatedServingConfigIds": [ # Output only. List of all ServingConfig ids this control is attached to. May take up to 10 minutes to update after changes.
+    "A String",
+  ],
+  "boostAction": { # Adjusts order of products in returned list. # Defines a boost-type control
+    "boost": 3.14, # Required. Strength of the boost, which should be in [-1, 1]. Negative boost means demotion. Default is 0.0 (No-op).
+    "dataStore": "A String", # Required. Specifies which data store's documents can be boosted by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. Specifies which products to apply the boost to. If no filter is provided all products will be boosted (No-op). Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "conditions": [ # Determines when the associated action will trigger. Omit to always apply the action. Currently only a single condition may be specified. Otherwise an INVALID ARGUMENT error is thrown.
+    { # Defines circumstances to be checked before allowing a behavior
+      "activeTimeRange": [ # Range of time(s) specifying when condition is active. Maximum of 10 time ranges.
+        { # Used for time-dependent conditions.
+          "endTime": "A String", # End of time range. Range is inclusive. Must be in the future.
+          "startTime": "A String", # Start of time range. Range is inclusive.
+        },
+      ],
+      "queryTerms": [ # Search only A list of terms to match the query on. Maximum of 10 query terms.
+        { # Matcher for search request query
+          "fullMatch": True or False, # Whether the search query needs to exactly match the query term.
+          "value": "A String", # The specific query value to match against Must be lowercase, must be UTF-8. Can have at most 3 space separated terms if full_match is true. Cannot be an empty string. Maximum length of 5000 characters.
+        },
+      ],
+    },
+  ],
+  "displayName": "A String", # Required. Human readable name. The identifier used in UI views. Must be UTF-8 encoded string. Length limit is 128 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  "filterAction": { # Specified which products may be included in results. Uses same filter as boost. # Defines a filter-type control Currently not supported by Recommendation
+    "dataStore": "A String", # Required. Specifies which data store's documents can be filtered by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. A filter to apply on the matching condition results. Required Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "name": "A String", # Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`
+  "redirectAction": { # Redirects a shopper to the provided URI. # Defines a redirect-type control.
+    "redirectUri": "A String", # Required. The URI to which the shopper will be redirected. Required. URI must have length equal or less than 2000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "solutionType": "A String", # Required. Immutable. What solution the control belongs to. Must be compatible with vertical of resource. Otherwise an INVALID ARGUMENT error is thrown.
+  "synonymsAction": { # Creates a set of terms that will act as synonyms of one another. Example: "happy" will also be considered as "glad", "glad" will also be considered as "happy". # Treats a group of terms as synonyms of one another.
+    "synonyms": [ # Defines a set of synonyms. Can specify up to 100 synonyms. Must specify at least 2 synonyms. Otherwise an INVALID ARGUMENT error is thrown.
+      "A String",
+    ],
+  },
+  "useCases": [ # Specifies the use case for the control. Affects what condition fields can be set. Only applies to SOLUTION_TYPE_SEARCH. Currently only allow one use case per control. Must be set when solution_type is SolutionType.SOLUTION_TYPE_SEARCH.
+    "A String",
+  ],
+}
+
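+
+  A sketch of a partial update via patch() with updateMask; the camelCase field-mask spelling is an assumption based on the JSON field names shown above, and `client`/`parent` are the hypothetical values from the create() example.
+
+    name = parent + "/controls/promote-available"  # hypothetical control
+    updated = (client.projects().locations().collections().dataStores()
+               .controls()
+               .patch(name=name,
+                      body={"displayName": "Promote in-stock items"},
+                      updateMask="displayName")
+               .execute())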
+
+
\ No newline at end of file
diff --git a/docs/dyn/discoveryengine_v1beta.projects.locations.collections.dataStores.customModels.html b/docs/dyn/discoveryengine_v1beta.projects.locations.collections.dataStores.customModels.html
index aeb15e1d133..bab5443933b 100644
--- a/docs/dyn/discoveryengine_v1beta.projects.locations.collections.dataStores.customModels.html
+++ b/docs/dyn/discoveryengine_v1beta.projects.locations.collections.dataStores.customModels.html
@@ -106,7 +106,7 @@

Method Details

"createTime": "A String", # Timestamp the Model was created at. "displayName": "A String", # The display name of the model. "modelState": "A String", # The state that the model is in (e.g.`TRAINING` or `TRAINING_FAILED`). - "modelVersion": "A String", + "modelVersion": "A String", # The version of the model. "name": "A String", # Required. The fully qualified resource name of the model. Format: `projects/{project_number}/locations/{location}/collections/{collection}/dataStores/{data_store}/customTuningModels/{custom_tuning_model}` model must be an alpha-numerical string with limit of 40 characters. "trainingStartTime": "A String", # Timestamp the model training was initiated. }, diff --git a/docs/dyn/discoveryengine_v1beta.projects.locations.collections.dataStores.html b/docs/dyn/discoveryengine_v1beta.projects.locations.collections.dataStores.html index b5cb670ef33..edb6f3f3f1e 100644 --- a/docs/dyn/discoveryengine_v1beta.projects.locations.collections.dataStores.html +++ b/docs/dyn/discoveryengine_v1beta.projects.locations.collections.dataStores.html @@ -79,6 +79,11 @@

Instance Methods

Returns the branches Resource.

+

+ controls() +

+

Returns the controls Resource.

+

conversations()

@@ -399,7 +404,7 @@

Method Details

Args:
  parent: string, Required. The parent branch resource name, such as `projects/{project}/locations/{location}/collections/{collection_id}`. If the caller does not have permission to list DataStores under this location, regardless of whether or not this data store exists, a PERMISSION_DENIED error is returned. (required)
- filter: string, Filter by solution type . For example: filter = 'solution_type:SOLUTION_TYPE_SEARCH'
+ filter: string, Filter by solution type. For example: `filter = 'solution_type:SOLUTION_TYPE_SEARCH'`
  pageSize: integer, Maximum number of DataStores to return. If unspecified, defaults to 10. The maximum allowed value is 50. Values above 50 will be coerced to 50. If this field is negative, an INVALID_ARGUMENT is returned.
  pageToken: string, A page token ListDataStoresResponse.next_page_token, received from a previous DataStoreService.ListDataStores call. Provide this to retrieve the subsequent page. When paginating, all other parameters provided to DataStoreService.ListDataStores must match the call that provided the page token. Otherwise, an INVALID_ARGUMENT error is returned.
  x__xgafv: string, V1 error format.
diff --git a/docs/dyn/discoveryengine_v1beta.projects.locations.collections.dataStores.servingConfigs.html b/docs/dyn/discoveryengine_v1beta.projects.locations.collections.dataStores.servingConfigs.html
index f805ebf706f..ef90311e505 100644
--- a/docs/dyn/discoveryengine_v1beta.projects.locations.collections.dataStores.servingConfigs.html
+++ b/docs/dyn/discoveryengine_v1beta.projects.locations.collections.dataStores.servingConfigs.html
@@ -115,6 +115,7 @@

Method Details

"answerGenerationSpec": { # Answer generation specification. # Answer generation specification. "answerLanguageCode": "A String", # Language code for Answer. Use language tags defined by [BCP47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt). Note: This is an experimental feature. "ignoreAdversarialQuery": True or False, # Specifies whether to filter out adversarial queries. The default value is `false`. Google employs search-query classification to detect adversarial queries. No answer is returned if the search query is classified as an adversarial query. For example, a user might ask a question regarding negative comments about the company or submit a query designed to generate unsafe, policy-violating output. If this field is set to `true`, we skip generating answers for adversarial queries and return fallback messages instead. + "ignoreLowRelevantContent": True or False, # Specifies whether to filter out queries that have low relevance. If this field is set to `false`, all search results are used regardless of relevance to generate answers. If set to `true` or unset, the behavior will be determined automatically by the service. "ignoreNonAnswerSeekingQuery": True or False, # Specifies whether to filter out queries that are not answer-seeking. The default value is `false`. Google employs search-query classification to detect answer-seeking queries. No answer is returned if the search query is classified as a non-answer seeking query. If this field is set to `true`, we skip generating answers for non-answer seeking queries and return fallback messages instead. "includeCitations": True or False, # Specifies whether to include citation metadata in the answer. The default value is `false`. "modelSpec": { # Answer Generation Model specification. # Answer generation model specification. @@ -167,6 +168,11 @@

Method Details

          },
        ],
      },
+     "dataStoreSpecs": [ # Specs defining dataStores to filter on in a search call and configurations for those dataStores. This is only considered for engines with multiple dataStores use case. For single dataStore within an engine, they should use the specs at the top level.
+       { # A struct to define data stores to filter on in a search call and configurations for those data stores. A maximum of 1 DataStoreSpec per data_store is allowed. Otherwise, an `INVALID_ARGUMENT` error is returned.
+         "dataStore": "A String", # Required. Full resource name of DataStore, such as `projects/{project}/locations/{location}/collections/{collection_id}/dataStores/{data_store_id}`.
+       },
+     ],
      "filter": "A String", # The filter syntax consists of an expression language for constructing a predicate from one or more fields of the documents being filtered. Filter expression is case-sensitive. This will be used to filter search results which may affect the Answer response. If this field is unrecognizable, an `INVALID_ARGUMENT` is returned. Filtering in Vertex AI Search is done by mapping the LHS filter key to a key property defined in the Vertex AI Search backend -- this mapping is defined by the customer in their schema. For example a media customers might have a field 'name' in their schema. In this case the filter would look like this: filter --> name:'ANY("king kong")' For more information about filtering including syntax and filter operators, see [Filter](https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata)
      "maxReturnResults": 42, # Number of search results to return. The default value is 10.
      "orderBy": "A String", # The order in which documents are returned. Documents can be ordered by a field in an Document object. Leave it unset if ordered by relevance. `order_by` expression is case-sensitive. For more information on ordering, see [Ordering](https://cloud.google.com/retail/docs/filter-and-order#order) If this field is unrecognizable, an `INVALID_ARGUMENT` is returned.
@@ -253,6 +259,9 @@

Method Details

"documentMetadata": { # Document metadata. # Document metadata. "document": "A String", # Document resource name. "pageIdentifier": "A String", # Page identifier. + "structData": { # The structured JSON metadata for the document. It is populated from the struct data from the Chunk in search result. + "a_key": "", # Properties of the object. + }, "title": "A String", # Title. "uri": "A String", # URI for the document. }, @@ -266,6 +275,9 @@

Method Details

}, ], "document": "A String", # Document resource name. + "structData": { # The structured JSON metadata for the document. It is populated from the struct data from the Chunk in search result. + "a_key": "", # Properties of the object. + }, "title": "A String", # Title. "uri": "A String", # URI for the document. }, @@ -734,6 +746,7 @@

Method Details

"selectedPosition": 42, # End user selected CompleteQueryResponse.QuerySuggestion.suggestion position, starting from 0. "selectedSuggestion": "A String", # End user selected CompleteQueryResponse.QuerySuggestion.suggestion. }, + "dataStore": "A String", # The DataStore resource full name, of the form `projects/{project}/locations/{location}/collections/{collection_id}/dataStores/{data_store_id}`. Optional. Only required for user events whose data store can't by determined by UserEvent.engine or UserEvent.documents. If data store is set in the parent of write/import/collect user event requests, this field can be omitted. "directUserRequest": True or False, # Should set to true if the request is made directly from the end user, in which case the UserEvent.user_info.user_agent can be populated from the HTTP request. This flag should be set only if the API request is made directly from the end user such as a mobile app (and not if a gateway or a server is processing and pushing the user events). This should not be set when using the JavaScript tag in UserEventService.CollectUserEvent. "documents": [ # List of Documents associated with this user event. This field is optional except for the following event types: * `view-item` * `add-to-cart` * `purchase` * `media-play` * `media-complete` In a `search` event, this field represents the documents returned to the end user on the current page (the end user may have not finished browsing the whole page yet). When a new page is returned to the end user, after pagination/filtering/ordering even for the same query, a new `search` event with different UserEvent.documents is desired. { # Detailed document information associated with a user event. @@ -746,6 +759,7 @@

Method Details

"uri": "A String", # The Document URI - only allowed for website data stores. }, ], + "engine": "A String", # The Engine resource name, in the form of `projects/{project}/locations/{location}/collections/{collection_id}/engines/{engine_id}`. Optional. Only required for Engine produced user events. For example, user events from blended search. "eventTime": "A String", # Only required for UserEventService.ImportUserEvents method. Timestamp of when the user event happened. "eventType": "A String", # Required. User event type. Allowed values are: Generic values: * `search`: Search for Documents. * `view-item`: Detailed page view of a Document. * `view-item-list`: View of a panel or ordered list of Documents. * `view-home-page`: View of the home page. * `view-category-page`: View of a category page, e.g. Home > Men > Jeans Retail-related values: * `add-to-cart`: Add an item(s) to cart, e.g. in Retail online shopping * `purchase`: Purchase an item(s) Media-related values: * `media-play`: Start/resume watching a video, playing a song, etc. * `media-complete`: Finished or stopped midway through a video, song, etc. "filter": "A String", # The filter syntax consists of an expression language for constructing a predicate from one or more fields of the documents being filtered. One example is for `search` events, the associated SearchRequest may contain a filter expression in SearchRequest.filter conforming to https://google.aip.dev/160#filtering. Similarly, for `view-item-list` events that are generated from a RecommendRequest, this field may be populated directly from RecommendRequest.filter conforming to https://google.aip.dev/160#filtering. The value must be a UTF-8 encoded string with a length limit of 1,000 characters. Otherwise, an `INVALID_ARGUMENT` error is returned. diff --git a/docs/dyn/discoveryengine_v1beta.projects.locations.collections.dataStores.sessions.answers.html b/docs/dyn/discoveryengine_v1beta.projects.locations.collections.dataStores.sessions.answers.html index 4634b970bed..81ae5963d08 100644 --- a/docs/dyn/discoveryengine_v1beta.projects.locations.collections.dataStores.sessions.answers.html +++ b/docs/dyn/discoveryengine_v1beta.projects.locations.collections.dataStores.sessions.answers.html @@ -135,6 +135,9 @@

Method Details

"documentMetadata": { # Document metadata. # Document metadata. "document": "A String", # Document resource name. "pageIdentifier": "A String", # Page identifier. + "structData": { # The structured JSON metadata for the document. It is populated from the struct data from the Chunk in search result. + "a_key": "", # Properties of the object. + }, "title": "A String", # Title. "uri": "A String", # URI for the document. }, @@ -148,6 +151,9 @@

Method Details

}, ], "document": "A String", # Document resource name. + "structData": { # The structured JSON metadata for the document. It is populated from the struct data from the Chunk in search result. + "a_key": "", # Properties of the object. + }, "title": "A String", # Title. "uri": "A String", # URI for the document. }, diff --git a/docs/dyn/discoveryengine_v1beta.projects.locations.collections.dataStores.userEvents.html b/docs/dyn/discoveryengine_v1beta.projects.locations.collections.dataStores.userEvents.html index c65e30ee0ca..008b038718d 100644 --- a/docs/dyn/discoveryengine_v1beta.projects.locations.collections.dataStores.userEvents.html +++ b/docs/dyn/discoveryengine_v1beta.projects.locations.collections.dataStores.userEvents.html @@ -84,7 +84,7 @@

Instance Methods

import_(parent, body=None, x__xgafv=None)

Bulk import of user events. Request processing might be synchronous. Events that already exist are skipped. Use this method for backfilling historical user events. Operation.response is of type ImportResponse. Note that it is possible for a subset of the items to be successfully inserted. Operation.metadata is of type ImportMetadata.

- write(parent, body=None, x__xgafv=None)

+ write(parent, body=None, writeAsync=None, x__xgafv=None)

Writes a single user event.

Method Details

@@ -169,6 +169,7 @@

Method Details

"selectedPosition": 42, # End user selected CompleteQueryResponse.QuerySuggestion.suggestion position, starting from 0. "selectedSuggestion": "A String", # End user selected CompleteQueryResponse.QuerySuggestion.suggestion. }, + "dataStore": "A String", # The DataStore resource full name, of the form `projects/{project}/locations/{location}/collections/{collection_id}/dataStores/{data_store_id}`. Optional. Only required for user events whose data store can't by determined by UserEvent.engine or UserEvent.documents. If data store is set in the parent of write/import/collect user event requests, this field can be omitted. "directUserRequest": True or False, # Should set to true if the request is made directly from the end user, in which case the UserEvent.user_info.user_agent can be populated from the HTTP request. This flag should be set only if the API request is made directly from the end user such as a mobile app (and not if a gateway or a server is processing and pushing the user events). This should not be set when using the JavaScript tag in UserEventService.CollectUserEvent. "documents": [ # List of Documents associated with this user event. This field is optional except for the following event types: * `view-item` * `add-to-cart` * `purchase` * `media-play` * `media-complete` In a `search` event, this field represents the documents returned to the end user on the current page (the end user may have not finished browsing the whole page yet). When a new page is returned to the end user, after pagination/filtering/ordering even for the same query, a new `search` event with different UserEvent.documents is desired. { # Detailed document information associated with a user event. @@ -181,6 +182,7 @@

Method Details

"uri": "A String", # The Document URI - only allowed for website data stores. }, ], + "engine": "A String", # The Engine resource name, in the form of `projects/{project}/locations/{location}/collections/{collection_id}/engines/{engine_id}`. Optional. Only required for Engine produced user events. For example, user events from blended search. "eventTime": "A String", # Only required for UserEventService.ImportUserEvents method. Timestamp of when the user event happened. "eventType": "A String", # Required. User event type. Allowed values are: Generic values: * `search`: Search for Documents. * `view-item`: Detailed page view of a Document. * `view-item-list`: View of a panel or ordered list of Documents. * `view-home-page`: View of the home page. * `view-category-page`: View of a category page, e.g. Home > Men > Jeans Retail-related values: * `add-to-cart`: Add an item(s) to cart, e.g. in Retail online shopping * `purchase`: Purchase an item(s) Media-related values: * `media-play`: Start/resume watching a video, playing a song, etc. * `media-complete`: Finished or stopped midway through a video, song, etc. "filter": "A String", # The filter syntax consists of an expression language for constructing a predicate from one or more fields of the documents being filtered. One example is for `search` events, the associated SearchRequest may contain a filter expression in SearchRequest.filter conforming to https://google.aip.dev/160#filtering. Similarly, for `view-item-list` events that are generated from a RecommendRequest, this field may be populated directly from RecommendRequest.filter conforming to https://google.aip.dev/160#filtering. The value must be a UTF-8 encoded string with a length limit of 1,000 characters. Otherwise, an `INVALID_ARGUMENT` error is returned. @@ -260,7 +262,7 @@

Method Details

- write(parent, body=None, x__xgafv=None)
+ write(parent, body=None, writeAsync=None, x__xgafv=None)
Writes a single user event.
 
 Args:
@@ -284,6 +286,7 @@ 

Method Details

"selectedPosition": 42, # End user selected CompleteQueryResponse.QuerySuggestion.suggestion position, starting from 0. "selectedSuggestion": "A String", # End user selected CompleteQueryResponse.QuerySuggestion.suggestion. }, + "dataStore": "A String", # The DataStore resource full name, of the form `projects/{project}/locations/{location}/collections/{collection_id}/dataStores/{data_store_id}`. Optional. Only required for user events whose data store can't by determined by UserEvent.engine or UserEvent.documents. If data store is set in the parent of write/import/collect user event requests, this field can be omitted. "directUserRequest": True or False, # Should set to true if the request is made directly from the end user, in which case the UserEvent.user_info.user_agent can be populated from the HTTP request. This flag should be set only if the API request is made directly from the end user such as a mobile app (and not if a gateway or a server is processing and pushing the user events). This should not be set when using the JavaScript tag in UserEventService.CollectUserEvent. "documents": [ # List of Documents associated with this user event. This field is optional except for the following event types: * `view-item` * `add-to-cart` * `purchase` * `media-play` * `media-complete` In a `search` event, this field represents the documents returned to the end user on the current page (the end user may have not finished browsing the whole page yet). When a new page is returned to the end user, after pagination/filtering/ordering even for the same query, a new `search` event with different UserEvent.documents is desired. { # Detailed document information associated with a user event. @@ -296,6 +299,7 @@

Method Details

"uri": "A String", # The Document URI - only allowed for website data stores. }, ], + "engine": "A String", # The Engine resource name, in the form of `projects/{project}/locations/{location}/collections/{collection_id}/engines/{engine_id}`. Optional. Only required for Engine produced user events. For example, user events from blended search. "eventTime": "A String", # Only required for UserEventService.ImportUserEvents method. Timestamp of when the user event happened. "eventType": "A String", # Required. User event type. Allowed values are: Generic values: * `search`: Search for Documents. * `view-item`: Detailed page view of a Document. * `view-item-list`: View of a panel or ordered list of Documents. * `view-home-page`: View of the home page. * `view-category-page`: View of a category page, e.g. Home > Men > Jeans Retail-related values: * `add-to-cart`: Add an item(s) to cart, e.g. in Retail online shopping * `purchase`: Purchase an item(s) Media-related values: * `media-play`: Start/resume watching a video, playing a song, etc. * `media-complete`: Finished or stopped midway through a video, song, etc. "filter": "A String", # The filter syntax consists of an expression language for constructing a predicate from one or more fields of the documents being filtered. One example is for `search` events, the associated SearchRequest may contain a filter expression in SearchRequest.filter conforming to https://google.aip.dev/160#filtering. Similarly, for `view-item-list` events that are generated from a RecommendRequest, this field may be populated directly from RecommendRequest.filter conforming to https://google.aip.dev/160#filtering. The value must be a UTF-8 encoded string with a length limit of 1,000 characters. Otherwise, an `INVALID_ARGUMENT` error is returned. @@ -342,6 +346,7 @@

Method Details

"userPseudoId": "A String", # Required. A unique identifier for tracking visitors. For example, this could be implemented with an HTTP cookie, which should be able to uniquely identify a visitor on a single device. This unique identifier should not change if the visitor log in/out of the website. Do not set the field to the same fixed ID for different users. This mixes the event history of those users together, which results in degraded model quality. The field must be a UTF-8 encoded string with a length limit of 128 characters. Otherwise, an `INVALID_ARGUMENT` error is returned. The field should not contain PII or user-data. We recommend to use Google Analytics [Client ID](https://developers.google.com/analytics/devguides/collection/analyticsjs/field-reference#clientId) for this field. } + writeAsync: boolean, If set to true, the user event is written asynchronously after validation, and the API responds without waiting for the write. x__xgafv: string, V1 error format. Allowed values 1 - v1 error format @@ -366,6 +371,7 @@

Method Details

"selectedPosition": 42, # End user selected CompleteQueryResponse.QuerySuggestion.suggestion position, starting from 0. "selectedSuggestion": "A String", # End user selected CompleteQueryResponse.QuerySuggestion.suggestion. }, + "dataStore": "A String", # The DataStore resource full name, of the form `projects/{project}/locations/{location}/collections/{collection_id}/dataStores/{data_store_id}`. Optional. Only required for user events whose data store can't by determined by UserEvent.engine or UserEvent.documents. If data store is set in the parent of write/import/collect user event requests, this field can be omitted. "directUserRequest": True or False, # Should set to true if the request is made directly from the end user, in which case the UserEvent.user_info.user_agent can be populated from the HTTP request. This flag should be set only if the API request is made directly from the end user such as a mobile app (and not if a gateway or a server is processing and pushing the user events). This should not be set when using the JavaScript tag in UserEventService.CollectUserEvent. "documents": [ # List of Documents associated with this user event. This field is optional except for the following event types: * `view-item` * `add-to-cart` * `purchase` * `media-play` * `media-complete` In a `search` event, this field represents the documents returned to the end user on the current page (the end user may have not finished browsing the whole page yet). When a new page is returned to the end user, after pagination/filtering/ordering even for the same query, a new `search` event with different UserEvent.documents is desired. { # Detailed document information associated with a user event. @@ -378,6 +384,7 @@

Method Details

"uri": "A String", # The Document URI - only allowed for website data stores. }, ], + "engine": "A String", # The Engine resource name, in the form of `projects/{project}/locations/{location}/collections/{collection_id}/engines/{engine_id}`. Optional. Only required for Engine produced user events. For example, user events from blended search. "eventTime": "A String", # Only required for UserEventService.ImportUserEvents method. Timestamp of when the user event happened. "eventType": "A String", # Required. User event type. Allowed values are: Generic values: * `search`: Search for Documents. * `view-item`: Detailed page view of a Document. * `view-item-list`: View of a panel or ordered list of Documents. * `view-home-page`: View of the home page. * `view-category-page`: View of a category page, e.g. Home > Men > Jeans Retail-related values: * `add-to-cart`: Add an item(s) to cart, e.g. in Retail online shopping * `purchase`: Purchase an item(s) Media-related values: * `media-play`: Start/resume watching a video, playing a song, etc. * `media-complete`: Finished or stopped midway through a video, song, etc. "filter": "A String", # The filter syntax consists of an expression language for constructing a predicate from one or more fields of the documents being filtered. One example is for `search` events, the associated SearchRequest may contain a filter expression in SearchRequest.filter conforming to https://google.aip.dev/160#filtering. Similarly, for `view-item-list` events that are generated from a RecommendRequest, this field may be populated directly from RecommendRequest.filter conforming to https://google.aip.dev/160#filtering. The value must be a UTF-8 encoded string with a length limit of 1,000 characters. Otherwise, an `INVALID_ARGUMENT` error is returned. diff --git a/docs/dyn/discoveryengine_v1beta.projects.locations.collections.engines.controls.html b/docs/dyn/discoveryengine_v1beta.projects.locations.collections.engines.controls.html new file mode 100644 index 00000000000..8ed7fa4fb27 --- /dev/null +++ b/docs/dyn/discoveryengine_v1beta.projects.locations.collections.engines.controls.html @@ -0,0 +1,482 @@ + + + +

Discovery Engine API . projects . locations . collections . engines . controls

+

Instance Methods

+

+ close()

+

Close httplib2 connections.

+

+ create(parent, body=None, controlId=None, x__xgafv=None)

+

Creates a Control. By default, 1000 controls are allowed for a data store. A request can be submitted to adjust this limit. If the Control to create already exists, an ALREADY_EXISTS error is returned.

+

+ delete(name, x__xgafv=None)

+

Deletes a Control. If the Control to delete does not exist, a NOT_FOUND error is returned.

+

+ get(name, x__xgafv=None)

+

Gets a Control.

+

+ list(parent, filter=None, pageSize=None, pageToken=None, x__xgafv=None)

+

Lists all Controls by their parent DataStore.

+

+ list_next()

+

Retrieves the next page of results.

+

+ patch(name, body=None, updateMask=None, x__xgafv=None)

+

Updates a Control. Control action type cannot be changed. If the Control to update does not exist, a NOT_FOUND error is returned.

+

Method Details

+
+ close() +
Close httplib2 connections.
+
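+
+A minimal sketch of building the client and releasing its connections; `build` uses application default credentials by default:
+
+from googleapiclient.discovery import build
+
+service = build("discoveryengine", "v1beta")
+try:
+    pass  # ... issue requests here ...
+finally:
+    service.close()  # Releases the underlying httplib2 connections.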
+ +
+ create(parent, body=None, controlId=None, x__xgafv=None) +
Creates a Control. By default, 1000 controls are allowed for a data store. A request can be submitted to adjust this limit. If the Control to create already exists, an ALREADY_EXISTS error is returned.
+
+Args:
+  parent: string, Required. Full resource name of parent data store. Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}` (required)
+  body: object, The request body.
+    The object takes the form of:
+
+{ # Defines a conditioned behavior to employ during serving. Must be attached to a ServingConfig to be considered at serving time. Permitted actions depend on `SolutionType`.
+  "associatedServingConfigIds": [ # Output only. List of all ServingConfig ids this control is attached to. May take up to 10 minutes to update after changes.
+    "A String",
+  ],
+  "boostAction": { # Adjusts order of products in returned list. # Defines a boost-type control
+    "boost": 3.14, # Required. Strength of the boost, which should be in [-1, 1]. Negative boost means demotion. Default is 0.0 (No-op).
+    "dataStore": "A String", # Required. Specifies which data store's documents can be boosted by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. Specifies which products to apply the boost to. If no filter is provided all products will be boosted (No-op). Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "conditions": [ # Determines when the associated action will trigger. Omit to always apply the action. Currently only a single condition may be specified. Otherwise an INVALID ARGUMENT error is thrown.
+    { # Defines circumstances to be checked before allowing a behavior
+      "activeTimeRange": [ # Range of time(s) specifying when condition is active. Maximum of 10 time ranges.
+        { # Used for time-dependent conditions.
+          "endTime": "A String", # End of time range. Range is inclusive. Must be in the future.
+          "startTime": "A String", # Start of time range. Range is inclusive.
+        },
+      ],
+      "queryTerms": [ # Search only A list of terms to match the query on. Maximum of 10 query terms.
+        { # Matcher for search request query
+          "fullMatch": True or False, # Whether the search query needs to exactly match the query term.
+          "value": "A String", # The specific query value to match against Must be lowercase, must be UTF-8. Can have at most 3 space separated terms if full_match is true. Cannot be an empty string. Maximum length of 5000 characters.
+        },
+      ],
+    },
+  ],
+  "displayName": "A String", # Required. Human readable name. The identifier used in UI views. Must be UTF-8 encoded string. Length limit is 128 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  "filterAction": { # Specified which products may be included in results. Uses same filter as boost. # Defines a filter-type control Currently not supported by Recommendation
+    "dataStore": "A String", # Required. Specifies which data store's documents can be filtered by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. A filter to apply on the matching condition results. Required Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "name": "A String", # Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`
+  "redirectAction": { # Redirects a shopper to the provided URI. # Defines a redirect-type control.
+    "redirectUri": "A String", # Required. The URI to which the shopper will be redirected. Required. URI must have length equal or less than 2000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "solutionType": "A String", # Required. Immutable. What solution the control belongs to. Must be compatible with vertical of resource. Otherwise an INVALID ARGUMENT error is thrown.
+  "synonymsAction": { # Creates a set of terms that will act as synonyms of one another. Example: "happy" will also be considered as "glad", "glad" will also be considered as "happy". # Treats a group of terms as synonyms of one another.
+    "synonyms": [ # Defines a set of synonyms. Can specify up to 100 synonyms. Must specify at least 2 synonyms. Otherwise an INVALID ARGUMENT error is thrown.
+      "A String",
+    ],
+  },
+  "useCases": [ # Specifies the use case for the control. Affects what condition fields can be set. Only applies to SOLUTION_TYPE_SEARCH. Currently only allow one use case per control. Must be set when solution_type is SolutionType.SOLUTION_TYPE_SEARCH.
+    "A String",
+  ],
+}
+
+  controlId: string, Required. The ID to use for the Control, which will become the final component of the Control's resource name. This value must be within 1-63 characters. Valid characters are /a-z-_/.
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Defines a conditioned behavior to employ during serving. Must be attached to a ServingConfig to be considered at serving time. Permitted actions depend on `SolutionType`.
+  "associatedServingConfigIds": [ # Output only. List of all ServingConfig ids this control is attached to. May take up to 10 minutes to update after changes.
+    "A String",
+  ],
+  "boostAction": { # Adjusts order of products in returned list. # Defines a boost-type control
+    "boost": 3.14, # Required. Strength of the boost, which should be in [-1, 1]. Negative boost means demotion. Default is 0.0 (No-op).
+    "dataStore": "A String", # Required. Specifies which data store's documents can be boosted by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. Specifies which products to apply the boost to. If no filter is provided all products will be boosted (No-op). Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "conditions": [ # Determines when the associated action will trigger. Omit to always apply the action. Currently only a single condition may be specified. Otherwise an INVALID ARGUMENT error is thrown.
+    { # Defines circumstances to be checked before allowing a behavior
+      "activeTimeRange": [ # Range of time(s) specifying when condition is active. Maximum of 10 time ranges.
+        { # Used for time-dependent conditions.
+          "endTime": "A String", # End of time range. Range is inclusive. Must be in the future.
+          "startTime": "A String", # Start of time range. Range is inclusive.
+        },
+      ],
+      "queryTerms": [ # Search only A list of terms to match the query on. Maximum of 10 query terms.
+        { # Matcher for search request query
+          "fullMatch": True or False, # Whether the search query needs to exactly match the query term.
+          "value": "A String", # The specific query value to match against Must be lowercase, must be UTF-8. Can have at most 3 space separated terms if full_match is true. Cannot be an empty string. Maximum length of 5000 characters.
+        },
+      ],
+    },
+  ],
+  "displayName": "A String", # Required. Human readable name. The identifier used in UI views. Must be UTF-8 encoded string. Length limit is 128 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  "filterAction": { # Specified which products may be included in results. Uses same filter as boost. # Defines a filter-type control Currently not supported by Recommendation
+    "dataStore": "A String", # Required. Specifies which data store's documents can be filtered by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. A filter to apply on the matching condition results. Required Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "name": "A String", # Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`
+  "redirectAction": { # Redirects a shopper to the provided URI. # Defines a redirect-type control.
+    "redirectUri": "A String", # Required. The URI to which the shopper will be redirected. Required. URI must have length equal or less than 2000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "solutionType": "A String", # Required. Immutable. What solution the control belongs to. Must be compatible with vertical of resource. Otherwise an INVALID ARGUMENT error is thrown.
+  "synonymsAction": { # Creates a set of terms that will act as synonyms of one another. Example: "happy" will also be considered as "glad", "glad" will also be considered as "happy". # Treats a group of terms as synonyms of one another.
+    "synonyms": [ # Defines a set of synonyms. Can specify up to 100 synonyms. Must specify at least 2 synonyms. Otherwise an INVALID ARGUMENT error is thrown.
+      "A String",
+    ],
+  },
+  "useCases": [ # Specifies the use case for the control. Affects what condition fields can be set. Only applies to SOLUTION_TYPE_SEARCH. Currently only allow one use case per control. Must be set when solution_type is SolutionType.SOLUTION_TYPE_SEARCH.
+    "A String",
+  ],
+}
+
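+
+For orientation, a minimal create() sketch with the Python client. All resource IDs below are hypothetical placeholders, and the parent uses the engine-scoped form matching this page (the Args above quote the dataStores form), so treat this as an illustration rather than the canonical call:
+
+from googleapiclient.discovery import build
+
+service = build("discoveryengine", "v1beta")
+
+# Hypothetical engine-scoped parent for this controls collection.
+parent = ("projects/my-project/locations/global/"
+          "collections/default_collection/engines/my-engine")
+
+# Request body following the Control schema documented above.
+control = {
+    "displayName": "Promote sale items",
+    "solutionType": "SOLUTION_TYPE_SEARCH",
+    "useCases": ["SEARCH_USE_CASE_SEARCH"],  # Assumed enum value.
+    "boostAction": {
+        "boost": 0.5,  # In [-1, 1]; negative values demote.
+        "dataStore": ("projects/my-project/locations/global/"
+                      "collections/default_collection/"
+                      "dataStores/my-data-store"),
+        "filter": 'category: ANY("sale")',  # Placeholder filter.
+    },
+}
+
+created = (
+    service.projects().locations().collections().engines().controls()
+    .create(parent=parent, body=control, controlId="promote-sale-items")
+    .execute()
+)
+print(created["name"])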
+ +
+ delete(name, x__xgafv=None) +
Deletes a Control. If the Control to delete does not exist, a NOT_FOUND error is returned.
+
+Args:
+  name: string, Required. The resource name of the Control to delete. Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}/controls/{control_id}` (required)
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # A generic empty message that you can re-use to avoid defining duplicated empty messages in your APIs. A typical example is to use it as the request or the response type of an API method. For instance: service Foo { rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty); }
+}
+
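+
+A corresponding delete() sketch; the control name below is a hypothetical placeholder:
+
+from googleapiclient.discovery import build
+
+service = build("discoveryengine", "v1beta")
+
+name = ("projects/my-project/locations/global/collections/"
+        "default_collection/engines/my-engine/controls/promote-sale-items")
+
+# A successful delete returns the empty message shown above.
+service.projects().locations().collections().engines().controls().delete(
+    name=name
+).execute()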
+ +
+ get(name, x__xgafv=None) +
Gets a Control.
+
+Args:
+  name: string, Required. The resource name of the Control to get. Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}/controls/{control_id}` (required)
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Defines a conditioned behavior to employ during serving. Must be attached to a ServingConfig to be considered at serving time. Permitted actions depend on `SolutionType`.
+  "associatedServingConfigIds": [ # Output only. List of all ServingConfig ids this control is attached to. May take up to 10 minutes to update after changes.
+    "A String",
+  ],
+  "boostAction": { # Adjusts order of products in returned list. # Defines a boost-type control
+    "boost": 3.14, # Required. Strength of the boost, which should be in [-1, 1]. Negative boost means demotion. Default is 0.0 (No-op).
+    "dataStore": "A String", # Required. Specifies which data store's documents can be boosted by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. Specifies which products to apply the boost to. If no filter is provided all products will be boosted (No-op). Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "conditions": [ # Determines when the associated action will trigger. Omit to always apply the action. Currently only a single condition may be specified. Otherwise an INVALID ARGUMENT error is thrown.
+    { # Defines circumstances to be checked before allowing a behavior
+      "activeTimeRange": [ # Range of time(s) specifying when condition is active. Maximum of 10 time ranges.
+        { # Used for time-dependent conditions.
+          "endTime": "A String", # End of time range. Range is inclusive. Must be in the future.
+          "startTime": "A String", # Start of time range. Range is inclusive.
+        },
+      ],
+      "queryTerms": [ # Search only A list of terms to match the query on. Maximum of 10 query terms.
+        { # Matcher for search request query
+          "fullMatch": True or False, # Whether the search query needs to exactly match the query term.
+          "value": "A String", # The specific query value to match against Must be lowercase, must be UTF-8. Can have at most 3 space separated terms if full_match is true. Cannot be an empty string. Maximum length of 5000 characters.
+        },
+      ],
+    },
+  ],
+  "displayName": "A String", # Required. Human readable name. The identifier used in UI views. Must be UTF-8 encoded string. Length limit is 128 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  "filterAction": { # Specified which products may be included in results. Uses same filter as boost. # Defines a filter-type control Currently not supported by Recommendation
+    "dataStore": "A String", # Required. Specifies which data store's documents can be filtered by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. A filter to apply on the matching condition results. Required Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "name": "A String", # Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`
+  "redirectAction": { # Redirects a shopper to the provided URI. # Defines a redirect-type control.
+    "redirectUri": "A String", # Required. The URI to which the shopper will be redirected. Required. URI must have length equal or less than 2000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "solutionType": "A String", # Required. Immutable. What solution the control belongs to. Must be compatible with vertical of resource. Otherwise an INVALID ARGUMENT error is thrown.
+  "synonymsAction": { # Creates a set of terms that will act as synonyms of one another. Example: "happy" will also be considered as "glad", "glad" will also be considered as "happy". # Treats a group of terms as synonyms of one another.
+    "synonyms": [ # Defines a set of synonyms. Can specify up to 100 synonyms. Must specify at least 2 synonyms. Otherwise an INVALID ARGUMENT error is thrown.
+      "A String",
+    ],
+  },
+  "useCases": [ # Specifies the use case for the control. Affects what condition fields can be set. Only applies to SOLUTION_TYPE_SEARCH. Currently only allow one use case per control. Must be set when solution_type is SolutionType.SOLUTION_TYPE_SEARCH.
+    "A String",
+  ],
+}
+
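+
+A short get() sketch, again with placeholder IDs:
+
+from googleapiclient.discovery import build
+
+service = build("discoveryengine", "v1beta")
+
+name = ("projects/my-project/locations/global/collections/"
+        "default_collection/engines/my-engine/controls/promote-sale-items")
+
+control = (
+    service.projects().locations().collections().engines().controls()
+    .get(name=name)
+    .execute()
+)
+print(control["displayName"])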
+ +
+ list(parent, filter=None, pageSize=None, pageToken=None, x__xgafv=None) +
Lists all Controls by their parent DataStore.
+
+Args:
+  parent: string, Required. The data store resource name. Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}` (required)
+  filter: string, Optional. A filter to apply on the list results. Supported features: * List all the products under the parent branch if filter is unset. Currently this field is unsupported.
+  pageSize: integer, Optional. Maximum number of results to return. If unspecified, defaults to 50. Max allowed value is 1000.
+  pageToken: string, Optional. A page token, received from a previous `ListControls` call. Provide this to retrieve the subsequent page.
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Response for ListControls method.
+  "controls": [ # All the Controls for a given data store.
+    { # Defines a conditioned behavior to employ during serving. Must be attached to a ServingConfig to be considered at serving time. Permitted actions depend on `SolutionType`.
+      "associatedServingConfigIds": [ # Output only. List of all ServingConfig ids this control is attached to. May take up to 10 minutes to update after changes.
+        "A String",
+      ],
+      "boostAction": { # Adjusts order of products in returned list. # Defines a boost-type control
+        "boost": 3.14, # Required. Strength of the boost, which should be in [-1, 1]. Negative boost means demotion. Default is 0.0 (No-op).
+        "dataStore": "A String", # Required. Specifies which data store's documents can be boosted by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+        "filter": "A String", # Required. Specifies which products to apply the boost to. If no filter is provided all products will be boosted (No-op). Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+      },
+      "conditions": [ # Determines when the associated action will trigger. Omit to always apply the action. Currently only a single condition may be specified. Otherwise an INVALID ARGUMENT error is thrown.
+        { # Defines circumstances to be checked before allowing a behavior
+          "activeTimeRange": [ # Range of time(s) specifying when condition is active. Maximum of 10 time ranges.
+            { # Used for time-dependent conditions.
+              "endTime": "A String", # End of time range. Range is inclusive. Must be in the future.
+              "startTime": "A String", # Start of time range. Range is inclusive.
+            },
+          ],
+          "queryTerms": [ # Search only A list of terms to match the query on. Maximum of 10 query terms.
+            { # Matcher for search request query
+              "fullMatch": True or False, # Whether the search query needs to exactly match the query term.
+              "value": "A String", # The specific query value to match against Must be lowercase, must be UTF-8. Can have at most 3 space separated terms if full_match is true. Cannot be an empty string. Maximum length of 5000 characters.
+            },
+          ],
+        },
+      ],
+      "displayName": "A String", # Required. Human readable name. The identifier used in UI views. Must be UTF-8 encoded string. Length limit is 128 characters. Otherwise an INVALID ARGUMENT error is thrown.
+      "filterAction": { # Specified which products may be included in results. Uses same filter as boost. # Defines a filter-type control Currently not supported by Recommendation
+        "dataStore": "A String", # Required. Specifies which data store's documents can be filtered by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+        "filter": "A String", # Required. A filter to apply on the matching condition results. Required Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+      },
+      "name": "A String", # Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`
+      "redirectAction": { # Redirects a shopper to the provided URI. # Defines a redirect-type control.
+        "redirectUri": "A String", # Required. The URI to which the shopper will be redirected. Required. URI must have length equal or less than 2000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+      },
+      "solutionType": "A String", # Required. Immutable. What solution the control belongs to. Must be compatible with vertical of resource. Otherwise an INVALID ARGUMENT error is thrown.
+      "synonymsAction": { # Creates a set of terms that will act as synonyms of one another. Example: "happy" will also be considered as "glad", "glad" will also be considered as "happy". # Treats a group of terms as synonyms of one another.
+        "synonyms": [ # Defines a set of synonyms. Can specify up to 100 synonyms. Must specify at least 2 synonyms. Otherwise an INVALID ARGUMENT error is thrown.
+          "A String",
+        ],
+      },
+      "useCases": [ # Specifies the use case for the control. Affects what condition fields can be set. Only applies to SOLUTION_TYPE_SEARCH. Currently only allow one use case per control. Must be set when solution_type is SolutionType.SOLUTION_TYPE_SEARCH.
+        "A String",
+      ],
+    },
+  ],
+  "nextPageToken": "A String", # Pagination token, if not returned indicates the last page.
+}
+
+ +
+ list_next() +
Retrieves the next page of results.
+
+        Args:
+          previous_request: The request for the previous page. (required)
+          previous_response: The response from the request for the previous page. (required)
+
+        Returns:
+          A request object that you can call 'execute()' on to request the next
+          page. Returns None if there are no more items in the collection.
+        
+
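+
+A pagination sketch combining list() and list_next(); the parent is a hypothetical placeholder:
+
+from googleapiclient.discovery import build
+
+service = build("discoveryengine", "v1beta")
+controls = service.projects().locations().collections().engines().controls()
+
+parent = ("projects/my-project/locations/global/"
+          "collections/default_collection/engines/my-engine")
+
+request = controls.list(parent=parent, pageSize=50)
+while request is not None:
+    response = request.execute()
+    for control in response.get("controls", []):
+        print(control["name"])
+    # list_next returns None once the last page has been consumed.
+    request = controls.list_next(previous_request=request,
+                                 previous_response=response)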
+ +
+ patch(name, body=None, updateMask=None, x__xgafv=None) +
Updates a Control. Control action type cannot be changed. If the Control to update does not exist, a NOT_FOUND error is returned.
+
+Args:
+  name: string, Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*` (required)
+  body: object, The request body.
+    The object takes the form of:
+
+{ # Defines a conditioned behavior to employ during serving. Must be attached to a ServingConfig to be considered at serving time. Permitted actions depend on `SolutionType`.
+  "associatedServingConfigIds": [ # Output only. List of all ServingConfig ids this control is attached to. May take up to 10 minutes to update after changes.
+    "A String",
+  ],
+  "boostAction": { # Adjusts order of products in returned list. # Defines a boost-type control
+    "boost": 3.14, # Required. Strength of the boost, which should be in [-1, 1]. Negative boost means demotion. Default is 0.0 (No-op).
+    "dataStore": "A String", # Required. Specifies which data store's documents can be boosted by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. Specifies which products to apply the boost to. If no filter is provided all products will be boosted (No-op). Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "conditions": [ # Determines when the associated action will trigger. Omit to always apply the action. Currently only a single condition may be specified. Otherwise an INVALID ARGUMENT error is thrown.
+    { # Defines circumstances to be checked before allowing a behavior
+      "activeTimeRange": [ # Range of time(s) specifying when condition is active. Maximum of 10 time ranges.
+        { # Used for time-dependent conditions.
+          "endTime": "A String", # End of time range. Range is inclusive. Must be in the future.
+          "startTime": "A String", # Start of time range. Range is inclusive.
+        },
+      ],
+      "queryTerms": [ # Search only A list of terms to match the query on. Maximum of 10 query terms.
+        { # Matcher for search request query
+          "fullMatch": True or False, # Whether the search query needs to exactly match the query term.
+          "value": "A String", # The specific query value to match against Must be lowercase, must be UTF-8. Can have at most 3 space separated terms if full_match is true. Cannot be an empty string. Maximum length of 5000 characters.
+        },
+      ],
+    },
+  ],
+  "displayName": "A String", # Required. Human readable name. The identifier used in UI views. Must be UTF-8 encoded string. Length limit is 128 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  "filterAction": { # Specified which products may be included in results. Uses same filter as boost. # Defines a filter-type control Currently not supported by Recommendation
+    "dataStore": "A String", # Required. Specifies which data store's documents can be filtered by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. A filter to apply on the matching condition results. Required Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "name": "A String", # Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`
+  "redirectAction": { # Redirects a shopper to the provided URI. # Defines a redirect-type control.
+    "redirectUri": "A String", # Required. The URI to which the shopper will be redirected. Required. URI must have length equal or less than 2000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "solutionType": "A String", # Required. Immutable. What solution the control belongs to. Must be compatible with vertical of resource. Otherwise an INVALID ARGUMENT error is thrown.
+  "synonymsAction": { # Creates a set of terms that will act as synonyms of one another. Example: "happy" will also be considered as "glad", "glad" will also be considered as "happy". # Treats a group of terms as synonyms of one another.
+    "synonyms": [ # Defines a set of synonyms. Can specify up to 100 synonyms. Must specify at least 2 synonyms. Otherwise an INVALID ARGUMENT error is thrown.
+      "A String",
+    ],
+  },
+  "useCases": [ # Specifies the use case for the control. Affects what condition fields can be set. Only applies to SOLUTION_TYPE_SEARCH. Currently only allow one use case per control. Must be set when solution_type is SolutionType.SOLUTION_TYPE_SEARCH.
+    "A String",
+  ],
+}
+
+  updateMask: string, Optional. Indicates which fields in the provided Control to update. The following are NOT supported: * Control.name * Control.solution_type If not set or empty, all supported fields are updated.
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Defines a conditioned behavior to employ during serving. Must be attached to a ServingConfig to be considered at serving time. Permitted actions depend on `SolutionType`.
+  "associatedServingConfigIds": [ # Output only. List of all ServingConfig ids this control is attached to. May take up to 10 minutes to update after changes.
+    "A String",
+  ],
+  "boostAction": { # Adjusts order of products in returned list. # Defines a boost-type control
+    "boost": 3.14, # Required. Strength of the boost, which should be in [-1, 1]. Negative boost means demotion. Default is 0.0 (No-op).
+    "dataStore": "A String", # Required. Specifies which data store's documents can be boosted by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. Specifies which products to apply the boost to. If no filter is provided all products will be boosted (No-op). Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "conditions": [ # Determines when the associated action will trigger. Omit to always apply the action. Currently only a single condition may be specified. Otherwise an INVALID ARGUMENT error is thrown.
+    { # Defines circumstances to be checked before allowing a behavior
+      "activeTimeRange": [ # Range of time(s) specifying when condition is active. Maximum of 10 time ranges.
+        { # Used for time-dependent conditions.
+          "endTime": "A String", # End of time range. Range is inclusive. Must be in the future.
+          "startTime": "A String", # Start of time range. Range is inclusive.
+        },
+      ],
+      "queryTerms": [ # Search only A list of terms to match the query on. Maximum of 10 query terms.
+        { # Matcher for search request query
+          "fullMatch": True or False, # Whether the search query needs to exactly match the query term.
+          "value": "A String", # The specific query value to match against Must be lowercase, must be UTF-8. Can have at most 3 space separated terms if full_match is true. Cannot be an empty string. Maximum length of 5000 characters.
+        },
+      ],
+    },
+  ],
+  "displayName": "A String", # Required. Human readable name. The identifier used in UI views. Must be UTF-8 encoded string. Length limit is 128 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  "filterAction": { # Specified which products may be included in results. Uses same filter as boost. # Defines a filter-type control Currently not supported by Recommendation
+    "dataStore": "A String", # Required. Specifies which data store's documents can be filtered by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. A filter to apply on the matching condition results. Required Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "name": "A String", # Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`
+  "redirectAction": { # Redirects a shopper to the provided URI. # Defines a redirect-type control.
+    "redirectUri": "A String", # Required. The URI to which the shopper will be redirected. Required. URI must have length equal or less than 2000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "solutionType": "A String", # Required. Immutable. What solution the control belongs to. Must be compatible with vertical of resource. Otherwise an INVALID ARGUMENT error is thrown.
+  "synonymsAction": { # Creates a set of terms that will act as synonyms of one another. Example: "happy" will also be considered as "glad", "glad" will also be considered as "happy". # Treats a group of terms as synonyms of one another.
+    "synonyms": [ # Defines a set of synonyms. Can specify up to 100 synonyms. Must specify at least 2 synonyms. Otherwise an INVALID ARGUMENT error is thrown.
+      "A String",
+    ],
+  },
+  "useCases": [ # Specifies the use case for the control. Affects what condition fields can be set. Only applies to SOLUTION_TYPE_SEARCH. Currently only allow one use case per control. Must be set when solution_type is SolutionType.SOLUTION_TYPE_SEARCH.
+    "A String",
+  ],
+}
+
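+
+An update sketch using updateMask to change only the display name (per the docs above, name and solution_type cannot be updated); IDs are hypothetical placeholders:
+
+from googleapiclient.discovery import build
+
+service = build("discoveryengine", "v1beta")
+
+name = ("projects/my-project/locations/global/collections/"
+        "default_collection/engines/my-engine/controls/promote-sale-items")
+
+updated = (
+    service.projects().locations().collections().engines().controls()
+    .patch(
+        name=name,
+        updateMask="display_name",
+        body={"displayName": "Promote clearance items"},
+    )
+    .execute()
+)
+print(updated["displayName"])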
+ + \ No newline at end of file diff --git a/docs/dyn/discoveryengine_v1beta.projects.locations.collections.engines.html b/docs/dyn/discoveryengine_v1beta.projects.locations.collections.engines.html index 4b7dc521343..de237054496 100644 --- a/docs/dyn/discoveryengine_v1beta.projects.locations.collections.engines.html +++ b/docs/dyn/discoveryengine_v1beta.projects.locations.collections.engines.html @@ -74,6 +74,11 @@

Discovery Engine API . projects . locations . collections . engines

Instance Methods

+

+ controls() +

+

Returns the controls Resource.

+

conversations()

@@ -153,7 +158,7 @@

Method Details

"dialogflowAgent": "A String", # The resource name of a Dialogflow agent, that this Chat Engine refers to. Format: `projects//locations//agents/`. }, "commonConfig": { # Common configurations for an Engine. # Common config spec that specifies the metadata of the engine. - "companyName": "A String", # Immutable. The name of the company, business or entity that is associated with the engine. Setting this may help improve LLM related features. + "companyName": "A String", # The name of the company, business or entity that is associated with the engine. Setting this may help improve LLM related features. }, "createTime": "A String", # Output only. Timestamp the Recommendation Engine was created at. "dataStoreIds": [ # The data stores associated with this engine. For SOLUTION_TYPE_SEARCH and SOLUTION_TYPE_RECOMMENDATION type of engines, they can only associate with at most one data store. If solution_type is SOLUTION_TYPE_CHAT, multiple DataStores in the same Collection can be associated here. Note that when used in CreateEngineRequest, one DataStore id must be provided as the system will use it for necessary initializations. @@ -265,7 +270,7 @@

Method Details

"dialogflowAgent": "A String", # The resource name of a Dialogflow agent, that this Chat Engine refers to. Format: `projects//locations//agents/`. }, "commonConfig": { # Common configurations for an Engine. # Common config spec that specifies the metadata of the engine. - "companyName": "A String", # Immutable. The name of the company, business or entity that is associated with the engine. Setting this may help improve LLM related features. + "companyName": "A String", # The name of the company, business or entity that is associated with the engine. Setting this may help improve LLM related features. }, "createTime": "A String", # Output only. Timestamp the Recommendation Engine was created at. "dataStoreIds": [ # The data stores associated with this engine. For SOLUTION_TYPE_SEARCH and SOLUTION_TYPE_RECOMMENDATION type of engines, they can only associate with at most one data store. If solution_type is SOLUTION_TYPE_CHAT, multiple DataStores in the same Collection can be associated here. Note that when used in CreateEngineRequest, one DataStore id must be provided as the system will use it for necessary initializations. @@ -318,7 +323,7 @@

Method Details

"dialogflowAgent": "A String", # The resource name of a Dialogflow agent, that this Chat Engine refers to. Format: `projects//locations//agents/`. }, "commonConfig": { # Common configurations for an Engine. # Common config spec that specifies the metadata of the engine. - "companyName": "A String", # Immutable. The name of the company, business or entity that is associated with the engine. Setting this may help improve LLM related features. + "companyName": "A String", # The name of the company, business or entity that is associated with the engine. Setting this may help improve LLM related features. }, "createTime": "A String", # Output only. Timestamp the Recommendation Engine was created at. "dataStoreIds": [ # The data stores associated with this engine. For SOLUTION_TYPE_SEARCH and SOLUTION_TYPE_RECOMMENDATION type of engines, they can only associate with at most one data store. If solution_type is SOLUTION_TYPE_CHAT, multiple DataStores in the same Collection can be associated here. Note that when used in CreateEngineRequest, one DataStore id must be provided as the system will use it for necessary initializations. @@ -378,7 +383,7 @@

Method Details

"dialogflowAgent": "A String", # The resource name of a Dialogflow agent, that this Chat Engine refers to. Format: `projects//locations//agents/`. }, "commonConfig": { # Common configurations for an Engine. # Common config spec that specifies the metadata of the engine. - "companyName": "A String", # Immutable. The name of the company, business or entity that is associated with the engine. Setting this may help improve LLM related features. + "companyName": "A String", # The name of the company, business or entity that is associated with the engine. Setting this may help improve LLM related features. }, "createTime": "A String", # Output only. Timestamp the Recommendation Engine was created at. "dataStoreIds": [ # The data stores associated with this engine. For SOLUTION_TYPE_SEARCH and SOLUTION_TYPE_RECOMMENDATION type of engines, they can only associate with at most one data store. If solution_type is SOLUTION_TYPE_CHAT, multiple DataStores in the same Collection can be associated here. Note that when used in CreateEngineRequest, one DataStore id must be provided as the system will use it for necessary initializations. @@ -420,7 +425,7 @@

Method Details

"dialogflowAgent": "A String", # The resource name of a Dialogflow agent, that this Chat Engine refers to. Format: `projects//locations//agents/`. }, "commonConfig": { # Common configurations for an Engine. # Common config spec that specifies the metadata of the engine. - "companyName": "A String", # Immutable. The name of the company, business or entity that is associated with the engine. Setting this may help improve LLM related features. + "companyName": "A String", # The name of the company, business or entity that is associated with the engine. Setting this may help improve LLM related features. }, "createTime": "A String", # Output only. Timestamp the Recommendation Engine was created at. "dataStoreIds": [ # The data stores associated with this engine. For SOLUTION_TYPE_SEARCH and SOLUTION_TYPE_RECOMMENDATION type of engines, they can only associate with at most one data store. If solution_type is SOLUTION_TYPE_CHAT, multiple DataStores in the same Collection can be associated here. Note that when used in CreateEngineRequest, one DataStore id must be provided as the system will use it for necessary initializations. @@ -474,7 +479,7 @@

Method Details

"dialogflowAgent": "A String", # The resource name of a Dialogflow agent, that this Chat Engine refers to. Format: `projects//locations//agents/`. }, "commonConfig": { # Common configurations for an Engine. # Common config spec that specifies the metadata of the engine. - "companyName": "A String", # Immutable. The name of the company, business or entity that is associated with the engine. Setting this may help improve LLM related features. + "companyName": "A String", # The name of the company, business or entity that is associated with the engine. Setting this may help improve LLM related features. }, "createTime": "A String", # Output only. Timestamp the Recommendation Engine was created at. "dataStoreIds": [ # The data stores associated with this engine. For SOLUTION_TYPE_SEARCH and SOLUTION_TYPE_RECOMMENDATION type of engines, they can only associate with at most one data store. If solution_type is SOLUTION_TYPE_CHAT, multiple DataStores in the same Collection can be associated here. Note that when used in CreateEngineRequest, one DataStore id must be provided as the system will use it for necessary initializations. @@ -528,7 +533,7 @@

Method Details

"dialogflowAgent": "A String", # The resource name of a Dialogflow agent, that this Chat Engine refers to. Format: `projects//locations//agents/`. }, "commonConfig": { # Common configurations for an Engine. # Common config spec that specifies the metadata of the engine. - "companyName": "A String", # Immutable. The name of the company, business or entity that is associated with the engine. Setting this may help improve LLM related features. + "companyName": "A String", # The name of the company, business or entity that is associated with the engine. Setting this may help improve LLM related features. }, "createTime": "A String", # Output only. Timestamp the Recommendation Engine was created at. "dataStoreIds": [ # The data stores associated with this engine. For SOLUTION_TYPE_SEARCH and SOLUTION_TYPE_RECOMMENDATION type of engines, they can only associate with at most one data store. If solution_type is SOLUTION_TYPE_CHAT, multiple DataStores in the same Collection can be associated here. Note that when used in CreateEngineRequest, one DataStore id must be provided as the system will use it for necessary initializations. diff --git a/docs/dyn/discoveryengine_v1beta.projects.locations.collections.engines.servingConfigs.html b/docs/dyn/discoveryengine_v1beta.projects.locations.collections.engines.servingConfigs.html index 6c86eb031a2..ccae63b427e 100644 --- a/docs/dyn/discoveryengine_v1beta.projects.locations.collections.engines.servingConfigs.html +++ b/docs/dyn/discoveryengine_v1beta.projects.locations.collections.engines.servingConfigs.html @@ -115,6 +115,7 @@

Method Details

"answerGenerationSpec": { # Answer generation specification. # Answer generation specification. "answerLanguageCode": "A String", # Language code for Answer. Use language tags defined by [BCP47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt). Note: This is an experimental feature. "ignoreAdversarialQuery": True or False, # Specifies whether to filter out adversarial queries. The default value is `false`. Google employs search-query classification to detect adversarial queries. No answer is returned if the search query is classified as an adversarial query. For example, a user might ask a question regarding negative comments about the company or submit a query designed to generate unsafe, policy-violating output. If this field is set to `true`, we skip generating answers for adversarial queries and return fallback messages instead. + "ignoreLowRelevantContent": True or False, # Specifies whether to filter out queries that have low relevance. If this field is set to `false`, all search results are used regardless of relevance to generate answers. If set to `true` or unset, the behavior will be determined automatically by the service. "ignoreNonAnswerSeekingQuery": True or False, # Specifies whether to filter out queries that are not answer-seeking. The default value is `false`. Google employs search-query classification to detect answer-seeking queries. No answer is returned if the search query is classified as a non-answer seeking query. If this field is set to `true`, we skip generating answers for non-answer seeking queries and return fallback messages instead. "includeCitations": True or False, # Specifies whether to include citation metadata in the answer. The default value is `false`. "modelSpec": { # Answer Generation Model specification. # Answer generation model specification. @@ -167,6 +168,11 @@

Method Details

}, ], }, + "dataStoreSpecs": [ # Specs defining dataStores to filter on in a search call and configurations for those dataStores. This is only considered for engines with multiple dataStores use case. For single dataStore within an engine, they should use the specs at the top level. + { # A struct to define data stores to filter on in a search call and configurations for those data stores. A maximum of 1 DataStoreSpec per data_store is allowed. Otherwise, an `INVALID_ARGUMENT` error is returned. + "dataStore": "A String", # Required. Full resource name of DataStore, such as `projects/{project}/locations/{location}/collections/{collection_id}/dataStores/{data_store_id}`. + }, + ], "filter": "A String", # The filter syntax consists of an expression language for constructing a predicate from one or more fields of the documents being filtered. Filter expression is case-sensitive. This will be used to filter search results which may affect the Answer response. If this field is unrecognizable, an `INVALID_ARGUMENT` is returned. Filtering in Vertex AI Search is done by mapping the LHS filter key to a key property defined in the Vertex AI Search backend -- this mapping is defined by the customer in their schema. For example a media customers might have a field 'name' in their schema. In this case the filter would look like this: filter --> name:'ANY("king kong")' For more information about filtering including syntax and filter operators, see [Filter](https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata) "maxReturnResults": 42, # Number of search results to return. The default value is 10. "orderBy": "A String", # The order in which documents are returned. Documents can be ordered by a field in an Document object. Leave it unset if ordered by relevance. `order_by` expression is case-sensitive. For more information on ordering, see [Ordering](https://cloud.google.com/retail/docs/filter-and-order#order) If this field is unrecognizable, an `INVALID_ARGUMENT` is returned. @@ -253,6 +259,9 @@

Method Details

"documentMetadata": { # Document metadata. # Document metadata. "document": "A String", # Document resource name. "pageIdentifier": "A String", # Page identifier. + "structData": { # The structured JSON metadata for the document. It is populated from the struct data from the Chunk in search result. + "a_key": "", # Properties of the object. + }, "title": "A String", # Title. "uri": "A String", # URI for the document. }, @@ -266,6 +275,9 @@

Method Details

}, ], "document": "A String", # Document resource name. + "structData": { # The structured JSON metadata for the document. It is populated from the struct data from the Chunk in search result. + "a_key": "", # Properties of the object. + }, "title": "A String", # Title. "uri": "A String", # URI for the document. }, @@ -734,6 +746,7 @@

Method Details

"selectedPosition": 42, # End user selected CompleteQueryResponse.QuerySuggestion.suggestion position, starting from 0. "selectedSuggestion": "A String", # End user selected CompleteQueryResponse.QuerySuggestion.suggestion. }, + "dataStore": "A String", # The DataStore resource full name, of the form `projects/{project}/locations/{location}/collections/{collection_id}/dataStores/{data_store_id}`. Optional. Only required for user events whose data store can't by determined by UserEvent.engine or UserEvent.documents. If data store is set in the parent of write/import/collect user event requests, this field can be omitted. "directUserRequest": True or False, # Should set to true if the request is made directly from the end user, in which case the UserEvent.user_info.user_agent can be populated from the HTTP request. This flag should be set only if the API request is made directly from the end user such as a mobile app (and not if a gateway or a server is processing and pushing the user events). This should not be set when using the JavaScript tag in UserEventService.CollectUserEvent. "documents": [ # List of Documents associated with this user event. This field is optional except for the following event types: * `view-item` * `add-to-cart` * `purchase` * `media-play` * `media-complete` In a `search` event, this field represents the documents returned to the end user on the current page (the end user may have not finished browsing the whole page yet). When a new page is returned to the end user, after pagination/filtering/ordering even for the same query, a new `search` event with different UserEvent.documents is desired. { # Detailed document information associated with a user event. @@ -746,6 +759,7 @@

Method Details

"uri": "A String", # The Document URI - only allowed for website data stores. }, ], + "engine": "A String", # The Engine resource name, in the form of `projects/{project}/locations/{location}/collections/{collection_id}/engines/{engine_id}`. Optional. Only required for Engine produced user events. For example, user events from blended search. "eventTime": "A String", # Only required for UserEventService.ImportUserEvents method. Timestamp of when the user event happened. "eventType": "A String", # Required. User event type. Allowed values are: Generic values: * `search`: Search for Documents. * `view-item`: Detailed page view of a Document. * `view-item-list`: View of a panel or ordered list of Documents. * `view-home-page`: View of the home page. * `view-category-page`: View of a category page, e.g. Home > Men > Jeans Retail-related values: * `add-to-cart`: Add an item(s) to cart, e.g. in Retail online shopping * `purchase`: Purchase an item(s) Media-related values: * `media-play`: Start/resume watching a video, playing a song, etc. * `media-complete`: Finished or stopped midway through a video, song, etc. "filter": "A String", # The filter syntax consists of an expression language for constructing a predicate from one or more fields of the documents being filtered. One example is for `search` events, the associated SearchRequest may contain a filter expression in SearchRequest.filter conforming to https://google.aip.dev/160#filtering. Similarly, for `view-item-list` events that are generated from a RecommendRequest, this field may be populated directly from RecommendRequest.filter conforming to https://google.aip.dev/160#filtering. The value must be a UTF-8 encoded string with a length limit of 1,000 characters. Otherwise, an `INVALID_ARGUMENT` error is returned. diff --git a/docs/dyn/discoveryengine_v1beta.projects.locations.collections.engines.sessions.answers.html b/docs/dyn/discoveryengine_v1beta.projects.locations.collections.engines.sessions.answers.html index 26bd7f79cad..39a93d5ac52 100644 --- a/docs/dyn/discoveryengine_v1beta.projects.locations.collections.engines.sessions.answers.html +++ b/docs/dyn/discoveryengine_v1beta.projects.locations.collections.engines.sessions.answers.html @@ -135,6 +135,9 @@

Method Details

"documentMetadata": { # Document metadata. # Document metadata. "document": "A String", # Document resource name. "pageIdentifier": "A String", # Page identifier. + "structData": { # The structured JSON metadata for the document. It is populated from the struct data from the Chunk in search result. + "a_key": "", # Properties of the object. + }, "title": "A String", # Title. "uri": "A String", # URI for the document. }, @@ -148,6 +151,9 @@

Method Details

}, ], "document": "A String", # Document resource name. + "structData": { # The structured JSON metadata for the document. It is populated from the struct data from the Chunk in search result. + "a_key": "", # Properties of the object. + }, "title": "A String", # Title. "uri": "A String", # URI for the document. }, diff --git a/docs/dyn/discoveryengine_v1beta.projects.locations.dataStores.controls.html b/docs/dyn/discoveryengine_v1beta.projects.locations.dataStores.controls.html new file mode 100644 index 00000000000..f3ccc09fd10 --- /dev/null +++ b/docs/dyn/discoveryengine_v1beta.projects.locations.dataStores.controls.html @@ -0,0 +1,482 @@ + + + +

Discovery Engine API . projects . locations . dataStores . controls

+

Instance Methods

+

+ close()

+

Close httplib2 connections.

+

+ create(parent, body=None, controlId=None, x__xgafv=None)

+

Creates a Control. By default, 1000 controls are allowed for a data store. A request can be submitted to adjust this limit. If the Control to create already exists, an ALREADY_EXISTS error is returned.

+

+ delete(name, x__xgafv=None)

+

Deletes a Control. If the Control to delete does not exist, a NOT_FOUND error is returned.

+

+ get(name, x__xgafv=None)

+

Gets a Control.

+

+ list(parent, filter=None, pageSize=None, pageToken=None, x__xgafv=None)

+

Lists all Controls by their parent DataStore.

+

+ list_next()

+

Retrieves the next page of results.

+

+ patch(name, body=None, updateMask=None, x__xgafv=None)

+

Updates a Control. Control action type cannot be changed. If the Control to update does not exist, a NOT_FOUND error is returned.

+

Method Details

+
+ close() +
Close httplib2 connections.
+
+ +
+ create(parent, body=None, controlId=None, x__xgafv=None) +
Creates a Control. By default, 1000 controls are allowed for a data store. A request can be submitted to adjust this limit. If the Control to create already exists, an ALREADY_EXISTS error is returned.
+
+Args:
+  parent: string, Required. Full resource name of parent data store. Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}` (required)
+  body: object, The request body.
+    The object takes the form of:
+
+{ # Defines a conditioned behavior to employ during serving. Must be attached to a ServingConfig to be considered at serving time. Permitted actions depend on `SolutionType`.
+  "associatedServingConfigIds": [ # Output only. List of all ServingConfig ids this control is attached to. May take up to 10 minutes to update after changes.
+    "A String",
+  ],
+  "boostAction": { # Adjusts order of products in returned list. # Defines a boost-type control
+    "boost": 3.14, # Required. Strength of the boost, which should be in [-1, 1]. Negative boost means demotion. Default is 0.0 (No-op).
+    "dataStore": "A String", # Required. Specifies which data store's documents can be boosted by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. Specifies which products to apply the boost to. If no filter is provided all products will be boosted (No-op). Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "conditions": [ # Determines when the associated action will trigger. Omit to always apply the action. Currently only a single condition may be specified. Otherwise an INVALID ARGUMENT error is thrown.
+    { # Defines circumstances to be checked before allowing a behavior
+      "activeTimeRange": [ # Range of time(s) specifying when condition is active. Maximum of 10 time ranges.
+        { # Used for time-dependent conditions.
+          "endTime": "A String", # End of time range. Range is inclusive. Must be in the future.
+          "startTime": "A String", # Start of time range. Range is inclusive.
+        },
+      ],
+      "queryTerms": [ # Search only A list of terms to match the query on. Maximum of 10 query terms.
+        { # Matcher for search request query
+          "fullMatch": True or False, # Whether the search query needs to exactly match the query term.
+          "value": "A String", # The specific query value to match against Must be lowercase, must be UTF-8. Can have at most 3 space separated terms if full_match is true. Cannot be an empty string. Maximum length of 5000 characters.
+        },
+      ],
+    },
+  ],
+  "displayName": "A String", # Required. Human readable name. The identifier used in UI views. Must be UTF-8 encoded string. Length limit is 128 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  "filterAction": { # Specified which products may be included in results. Uses same filter as boost. # Defines a filter-type control Currently not supported by Recommendation
+    "dataStore": "A String", # Required. Specifies which data store's documents can be filtered by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. A filter to apply on the matching condition results. Required Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "name": "A String", # Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`
+  "redirectAction": { # Redirects a shopper to the provided URI. # Defines a redirect-type control.
+    "redirectUri": "A String", # Required. The URI to which the shopper will be redirected. Required. URI must have length equal or less than 2000 characters. Otherwise an INVALID ARGUMENT error is thrown.
+  },
+  "solutionType": "A String", # Required. Immutable. What solution the control belongs to. Must be compatible with vertical of resource. Otherwise an INVALID ARGUMENT error is thrown.
+  "synonymsAction": { # Creates a set of terms that will act as synonyms of one another. Example: "happy" will also be considered as "glad", "glad" will also be considered as "happy". # Treats a group of terms as synonyms of one another.
+    "synonyms": [ # Defines a set of synonyms. Can specify up to 100 synonyms. Must specify at least 2 synonyms. Otherwise an INVALID ARGUMENT error is thrown.
+      "A String",
+    ],
+  },
+  "useCases": [ # Specifies the use case for the control. Affects what condition fields can be set. Only applies to SOLUTION_TYPE_SEARCH. Currently only allow one use case per control. Must be set when solution_type is SolutionType.SOLUTION_TYPE_SEARCH.
+    "A String",
+  ],
+}
+
+  controlId: string, Required. The ID to use for the Control, which will become the final component of the Control's resource name. This value must be within 1-63 characters. Valid characters are /a-z-_/.
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Defines a conditioned behavior to employ during serving. Must be attached to a ServingConfig to be considered at serving time. Permitted actions are dependent on `SolutionType`.
+  "associatedServingConfigIds": [ # Output only. List of all ServingConfig ids this control is attached to. May take up to 10 minutes to update after changes.
+    "A String",
+  ],
+  "boostAction": { # Adjusts the order of products in the returned list. # Defines a boost-type control.
+    "boost": 3.14, # Required. Strength of the boost, which should be in [-1, 1]. Negative boost means demotion. Default is 0.0 (no-op).
+    "dataStore": "A String", # Required. Specifies which data store's documents can be boosted by this control. Full data store name, e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. Specifies which products to apply the boost to. If no filter is provided, all products will be boosted (no-op). Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID_ARGUMENT error is thrown.
+  },
+  "conditions": [ # Determines when the associated action will trigger. Omit to always apply the action. Currently only a single condition may be specified. Otherwise an INVALID_ARGUMENT error is thrown.
+    { # Defines circumstances to be checked before allowing a behavior.
+      "activeTimeRange": [ # Range of time(s) specifying when the condition is active. Maximum of 10 time ranges.
+        { # Used for time-dependent conditions.
+          "endTime": "A String", # End of time range. Range is inclusive. Must be in the future.
+          "startTime": "A String", # Start of time range. Range is inclusive.
+        },
+      ],
+      "queryTerms": [ # Search only. A list of terms to match the query on. Maximum of 10 query terms.
+        { # Matcher for a search request query.
+          "fullMatch": True or False, # Whether the search query needs to exactly match the query term.
+          "value": "A String", # The specific query value to match against. Must be lowercase and UTF-8 encoded. Can have at most 3 space-separated terms if full_match is true. Cannot be an empty string. Maximum length of 5000 characters.
+        },
+      ],
+    },
+  ],
+  "displayName": "A String", # Required. Human-readable name. The identifier used in UI views. Must be a UTF-8 encoded string. Length limit is 128 characters. Otherwise an INVALID_ARGUMENT error is thrown.
+  "filterAction": { # Specifies which products may be included in results. Uses the same filter as boost. # Defines a filter-type control. Currently not supported by Recommendation.
+    "dataStore": "A String", # Required. Specifies which data store's documents can be filtered by this control. Full data store name, e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. A filter to apply on the matching condition results. Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID_ARGUMENT error is thrown.
+  },
+  "name": "A String", # Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`.
+  "redirectAction": { # Redirects a shopper to the provided URI. # Defines a redirect-type control.
+    "redirectUri": "A String", # Required. The URI to which the shopper will be redirected. Must be 2000 characters or fewer. Otherwise an INVALID_ARGUMENT error is thrown.
+  },
+  "solutionType": "A String", # Required. Immutable. What solution the control belongs to. Must be compatible with the vertical of the resource. Otherwise an INVALID_ARGUMENT error is thrown.
+  "synonymsAction": { # Creates a set of terms that will act as synonyms of one another. Example: "happy" will also be considered as "glad", and "glad" will also be considered as "happy". # Treats a group of terms as synonyms of one another.
+    "synonyms": [ # Defines a set of synonyms. Can specify up to 100 synonyms. Must specify at least 2 synonyms. Otherwise an INVALID_ARGUMENT error is thrown.
+      "A String",
+    ],
+  },
+  "useCases": [ # Specifies the use case for the control. Affects what condition fields can be set. Only applies to SOLUTION_TYPE_SEARCH. Currently only one use case per control is allowed. Must be set when solution_type is SolutionType.SOLUTION_TYPE_SEARCH.
+    "A String",
+  ],
+}
+
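As a usage sketch (not part of the generated reference): creating a boost Control with this client library looks roughly like the following. The project, data store, control ID, filter expression, and the SEARCH_USE_CASE_SEARCH enum value are placeholder assumptions; Application Default Credentials are assumed to be configured.

from googleapiclient.discovery import build

service = build("discoveryengine", "v1beta")  # assumes Application Default Credentials

# Hypothetical parent data store; substitute your own resource name.
parent = ("projects/example-project/locations/global/"
          "collections/default_collection/dataStores/example-data-store")

control = {
    "displayName": "Boost pricing docs",
    "solutionType": "SOLUTION_TYPE_SEARCH",
    "useCases": ["SEARCH_USE_CASE_SEARCH"],  # assumed enum value
    "boostAction": {
        "boost": 0.5,  # must lie in [-1, 1]
        "dataStore": parent,
        "filter": "category: ANY(\"pricing\")",  # hypothetical filter
    },
}

response = (
    service.projects().locations().dataStores().controls()
    .create(parent=parent, body=control, controlId="boost-pricing")
    .execute()
)
print(response["name"])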
+
+ delete(name, x__xgafv=None)
Deletes a Control. If the Control to delete does not exist, a NOT_FOUND error is returned.
+
+Args:
+  name: string, Required. The resource name of the Control to delete. Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}/controls/{control_id}` (required)
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # A generic empty message that you can re-use to avoid defining duplicated empty messages in your APIs. A typical example is to use it as the request or the response type of an API method. For instance: service Foo { rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty); }
+}
+
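A minimal sketch of the corresponding client call, with a placeholder resource name:

from googleapiclient.discovery import build

service = build("discoveryengine", "v1beta")  # assumes Application Default Credentials

# Hypothetical control resource name; substitute your own.
name = ("projects/example-project/locations/global/collections/default_collection/"
        "dataStores/example-data-store/controls/boost-pricing")

# A successful delete returns the empty message shown above.
service.projects().locations().dataStores().controls().delete(name=name).execute()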
+
+ get(name, x__xgafv=None)
Gets a Control.
+
+Args:
+  name: string, Required. The resource name of the Control to get. Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}/controls/{control_id}` (required)
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Defines a conditioned behavior to employ during serving. Must be attached to a ServingConfig to be considered at serving time. Permitted actions are dependent on `SolutionType`.
+  "associatedServingConfigIds": [ # Output only. List of all ServingConfig ids this control is attached to. May take up to 10 minutes to update after changes.
+    "A String",
+  ],
+  "boostAction": { # Adjusts the order of products in the returned list. # Defines a boost-type control.
+    "boost": 3.14, # Required. Strength of the boost, which should be in [-1, 1]. Negative boost means demotion. Default is 0.0 (no-op).
+    "dataStore": "A String", # Required. Specifies which data store's documents can be boosted by this control. Full data store name, e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. Specifies which products to apply the boost to. If no filter is provided, all products will be boosted (no-op). Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID_ARGUMENT error is thrown.
+  },
+  "conditions": [ # Determines when the associated action will trigger. Omit to always apply the action. Currently only a single condition may be specified. Otherwise an INVALID_ARGUMENT error is thrown.
+    { # Defines circumstances to be checked before allowing a behavior.
+      "activeTimeRange": [ # Range of time(s) specifying when the condition is active. Maximum of 10 time ranges.
+        { # Used for time-dependent conditions.
+          "endTime": "A String", # End of time range. Range is inclusive. Must be in the future.
+          "startTime": "A String", # Start of time range. Range is inclusive.
+        },
+      ],
+      "queryTerms": [ # Search only. A list of terms to match the query on. Maximum of 10 query terms.
+        { # Matcher for a search request query.
+          "fullMatch": True or False, # Whether the search query needs to exactly match the query term.
+          "value": "A String", # The specific query value to match against. Must be lowercase and UTF-8 encoded. Can have at most 3 space-separated terms if full_match is true. Cannot be an empty string. Maximum length of 5000 characters.
+        },
+      ],
+    },
+  ],
+  "displayName": "A String", # Required. Human-readable name. The identifier used in UI views. Must be a UTF-8 encoded string. Length limit is 128 characters. Otherwise an INVALID_ARGUMENT error is thrown.
+  "filterAction": { # Specifies which products may be included in results. Uses the same filter as boost. # Defines a filter-type control. Currently not supported by Recommendation.
+    "dataStore": "A String", # Required. Specifies which data store's documents can be filtered by this control. Full data store name, e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. A filter to apply on the matching condition results. Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID_ARGUMENT error is thrown.
+  },
+  "name": "A String", # Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`.
+  "redirectAction": { # Redirects a shopper to the provided URI. # Defines a redirect-type control.
+    "redirectUri": "A String", # Required. The URI to which the shopper will be redirected. Must be 2000 characters or fewer. Otherwise an INVALID_ARGUMENT error is thrown.
+  },
+  "solutionType": "A String", # Required. Immutable. What solution the control belongs to. Must be compatible with the vertical of the resource. Otherwise an INVALID_ARGUMENT error is thrown.
+  "synonymsAction": { # Creates a set of terms that will act as synonyms of one another. Example: "happy" will also be considered as "glad", and "glad" will also be considered as "happy". # Treats a group of terms as synonyms of one another.
+    "synonyms": [ # Defines a set of synonyms. Can specify up to 100 synonyms. Must specify at least 2 synonyms. Otherwise an INVALID_ARGUMENT error is thrown.
+      "A String",
+    ],
+  },
+  "useCases": [ # Specifies the use case for the control. Affects what condition fields can be set. Only applies to SOLUTION_TYPE_SEARCH. Currently only one use case per control is allowed. Must be set when solution_type is SolutionType.SOLUTION_TYPE_SEARCH.
+    "A String",
+  ],
+}
+
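The equivalent client call, sketched with a placeholder resource name:

from googleapiclient.discovery import build

service = build("discoveryengine", "v1beta")  # assumes Application Default Credentials

name = ("projects/example-project/locations/global/collections/default_collection/"
        "dataStores/example-data-store/controls/boost-pricing")  # hypothetical

control = service.projects().locations().dataStores().controls().get(name=name).execute()
print(control["displayName"], control["solutionType"])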
+
+ list(parent, filter=None, pageSize=None, pageToken=None, x__xgafv=None)
Lists all Controls by their parent DataStore.
+
+Args:
+  parent: string, Required. The data store resource name. Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}` (required)
+  filter: string, Optional. A filter to apply on the list results. Supported features: * List all the products under the parent branch if filter is unset. Currently this field is unsupported.
+  pageSize: integer, Optional. Maximum number of results to return. If unspecified, defaults to 50. Max allowed value is 1000.
+  pageToken: string, Optional. A page token, received from a previous `ListControls` call. Provide this to retrieve the subsequent page.
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Response for ListControls method.
+  "controls": [ # All the Controls for a given data store.
+    { # Defines a conditioned behavior to employ during serving. Must be attached to a ServingConfig to be considered at serving time. Permitted actions are dependent on `SolutionType`.
+      "associatedServingConfigIds": [ # Output only. List of all ServingConfig ids this control is attached to. May take up to 10 minutes to update after changes.
+        "A String",
+      ],
+      "boostAction": { # Adjusts the order of products in the returned list. # Defines a boost-type control.
+        "boost": 3.14, # Required. Strength of the boost, which should be in [-1, 1]. Negative boost means demotion. Default is 0.0 (no-op).
+        "dataStore": "A String", # Required. Specifies which data store's documents can be boosted by this control. Full data store name, e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+        "filter": "A String", # Required. Specifies which products to apply the boost to. If no filter is provided, all products will be boosted (no-op). Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID_ARGUMENT error is thrown.
+      },
+      "conditions": [ # Determines when the associated action will trigger. Omit to always apply the action. Currently only a single condition may be specified. Otherwise an INVALID_ARGUMENT error is thrown.
+        { # Defines circumstances to be checked before allowing a behavior.
+          "activeTimeRange": [ # Range of time(s) specifying when the condition is active. Maximum of 10 time ranges.
+            { # Used for time-dependent conditions.
+              "endTime": "A String", # End of time range. Range is inclusive. Must be in the future.
+              "startTime": "A String", # Start of time range. Range is inclusive.
+            },
+          ],
+          "queryTerms": [ # Search only. A list of terms to match the query on. Maximum of 10 query terms.
+            { # Matcher for a search request query.
+              "fullMatch": True or False, # Whether the search query needs to exactly match the query term.
+              "value": "A String", # The specific query value to match against. Must be lowercase and UTF-8 encoded. Can have at most 3 space-separated terms if full_match is true. Cannot be an empty string. Maximum length of 5000 characters.
+            },
+          ],
+        },
+      ],
+      "displayName": "A String", # Required. Human-readable name. The identifier used in UI views. Must be a UTF-8 encoded string. Length limit is 128 characters. Otherwise an INVALID_ARGUMENT error is thrown.
+      "filterAction": { # Specifies which products may be included in results. Uses the same filter as boost. # Defines a filter-type control. Currently not supported by Recommendation.
+        "dataStore": "A String", # Required. Specifies which data store's documents can be filtered by this control. Full data store name, e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+        "filter": "A String", # Required. A filter to apply on the matching condition results. Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID_ARGUMENT error is thrown.
+      },
+      "name": "A String", # Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`.
+      "redirectAction": { # Redirects a shopper to the provided URI. # Defines a redirect-type control.
+        "redirectUri": "A String", # Required. The URI to which the shopper will be redirected. Must be 2000 characters or fewer. Otherwise an INVALID_ARGUMENT error is thrown.
+      },
+      "solutionType": "A String", # Required. Immutable. What solution the control belongs to. Must be compatible with the vertical of the resource. Otherwise an INVALID_ARGUMENT error is thrown.
+      "synonymsAction": { # Creates a set of terms that will act as synonyms of one another. Example: "happy" will also be considered as "glad", and "glad" will also be considered as "happy". # Treats a group of terms as synonyms of one another.
+        "synonyms": [ # Defines a set of synonyms. Can specify up to 100 synonyms. Must specify at least 2 synonyms. Otherwise an INVALID_ARGUMENT error is thrown.
+          "A String",
+        ],
+      },
+      "useCases": [ # Specifies the use case for the control. Affects what condition fields can be set. Only applies to SOLUTION_TYPE_SEARCH. Currently only one use case per control is allowed. Must be set when solution_type is SolutionType.SOLUTION_TYPE_SEARCH.
+        "A String",
+      ],
+    },
+  ],
+  "nextPageToken": "A String", # Pagination token, if not returned indicates the last page.
+}
+
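A sketch of a single list call against a placeholder data store; the filter parameter is omitted here since it is documented as currently unsupported:

from googleapiclient.discovery import build

service = build("discoveryengine", "v1beta")  # assumes Application Default Credentials

parent = ("projects/example-project/locations/global/collections/default_collection/"
          "dataStores/example-data-store")  # hypothetical

response = (
    service.projects().locations().dataStores().controls()
    .list(parent=parent, pageSize=50)
    .execute()
)
for control in response.get("controls", []):
    print(control["name"])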
+
+ list_next()
Retrieves the next page of results.
+
+        Args:
+          previous_request: The request for the previous page. (required)
+          previous_response: The response from the request for the previous page. (required)
+
+        Returns:
+          A request object that you can call 'execute()' on to request the next
+          page. Returns None if there are no more items in the collection.
+        
+
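Pagination with list_next follows the standard google-api-python-client pattern; a sketch using the placeholder names from the list example above:

from googleapiclient.discovery import build

service = build("discoveryengine", "v1beta")  # assumes Application Default Credentials
controls = service.projects().locations().dataStores().controls()

parent = ("projects/example-project/locations/global/collections/default_collection/"
          "dataStores/example-data-store")  # hypothetical

request = controls.list(parent=parent, pageSize=50)
while request is not None:
    response = request.execute()
    for control in response.get("controls", []):
        print(control["name"])
    # list_next returns None once the final page has been consumed.
    request = controls.list_next(previous_request=request, previous_response=response)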
+
+ patch(name, body=None, updateMask=None, x__xgafv=None)
Updates a Control. Control action type cannot be changed. If the Control to update does not exist, a NOT_FOUND error is returned.
+
+Args:
+  name: string, Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*` (required)
+  body: object, The request body.
+    The object takes the form of:
+
+{ # Defines a conditioned behavior to employ during serving. Must be attached to a ServingConfig to be considered at serving time. Permitted actions are dependent on `SolutionType`.
+  "associatedServingConfigIds": [ # Output only. List of all ServingConfig ids this control is attached to. May take up to 10 minutes to update after changes.
+    "A String",
+  ],
+  "boostAction": { # Adjusts the order of products in the returned list. # Defines a boost-type control.
+    "boost": 3.14, # Required. Strength of the boost, which should be in [-1, 1]. Negative boost means demotion. Default is 0.0 (no-op).
+    "dataStore": "A String", # Required. Specifies which data store's documents can be boosted by this control. Full data store name, e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. Specifies which products to apply the boost to. If no filter is provided, all products will be boosted (no-op). Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID_ARGUMENT error is thrown.
+  },
+  "conditions": [ # Determines when the associated action will trigger. Omit to always apply the action. Currently only a single condition may be specified. Otherwise an INVALID_ARGUMENT error is thrown.
+    { # Defines circumstances to be checked before allowing a behavior.
+      "activeTimeRange": [ # Range of time(s) specifying when the condition is active. Maximum of 10 time ranges.
+        { # Used for time-dependent conditions.
+          "endTime": "A String", # End of time range. Range is inclusive. Must be in the future.
+          "startTime": "A String", # Start of time range. Range is inclusive.
+        },
+      ],
+      "queryTerms": [ # Search only. A list of terms to match the query on. Maximum of 10 query terms.
+        { # Matcher for a search request query.
+          "fullMatch": True or False, # Whether the search query needs to exactly match the query term.
+          "value": "A String", # The specific query value to match against. Must be lowercase and UTF-8 encoded. Can have at most 3 space-separated terms if full_match is true. Cannot be an empty string. Maximum length of 5000 characters.
+        },
+      ],
+    },
+  ],
+  "displayName": "A String", # Required. Human-readable name. The identifier used in UI views. Must be a UTF-8 encoded string. Length limit is 128 characters. Otherwise an INVALID_ARGUMENT error is thrown.
+  "filterAction": { # Specifies which products may be included in results. Uses the same filter as boost. # Defines a filter-type control. Currently not supported by Recommendation.
+    "dataStore": "A String", # Required. Specifies which data store's documents can be filtered by this control. Full data store name, e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. A filter to apply on the matching condition results. Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID_ARGUMENT error is thrown.
+  },
+  "name": "A String", # Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`.
+  "redirectAction": { # Redirects a shopper to the provided URI. # Defines a redirect-type control.
+    "redirectUri": "A String", # Required. The URI to which the shopper will be redirected. Must be 2000 characters or fewer. Otherwise an INVALID_ARGUMENT error is thrown.
+  },
+  "solutionType": "A String", # Required. Immutable. What solution the control belongs to. Must be compatible with the vertical of the resource. Otherwise an INVALID_ARGUMENT error is thrown.
+  "synonymsAction": { # Creates a set of terms that will act as synonyms of one another. Example: "happy" will also be considered as "glad", and "glad" will also be considered as "happy". # Treats a group of terms as synonyms of one another.
+    "synonyms": [ # Defines a set of synonyms. Can specify up to 100 synonyms. Must specify at least 2 synonyms. Otherwise an INVALID_ARGUMENT error is thrown.
+      "A String",
+    ],
+  },
+  "useCases": [ # Specifies the use case for the control. Affects what condition fields can be set. Only applies to SOLUTION_TYPE_SEARCH. Currently only one use case per control is allowed. Must be set when solution_type is SolutionType.SOLUTION_TYPE_SEARCH.
+    "A String",
+  ],
+}
+
+  updateMask: string, Optional. Indicates which fields in the provided Control to update. The following are NOT supported: * Control.name * Control.solution_type If not set or empty, all supported fields are updated.
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Defines a conditioned behavior to employ during serving. Must be attached to a ServingConfig to be considered at serving time. Permitted actions are dependent on `SolutionType`.
+  "associatedServingConfigIds": [ # Output only. List of all ServingConfig ids this control is attached to. May take up to 10 minutes to update after changes.
+    "A String",
+  ],
+  "boostAction": { # Adjusts the order of products in the returned list. # Defines a boost-type control.
+    "boost": 3.14, # Required. Strength of the boost, which should be in [-1, 1]. Negative boost means demotion. Default is 0.0 (no-op).
+    "dataStore": "A String", # Required. Specifies which data store's documents can be boosted by this control. Full data store name, e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. Specifies which products to apply the boost to. If no filter is provided, all products will be boosted (no-op). Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID_ARGUMENT error is thrown.
+  },
+  "conditions": [ # Determines when the associated action will trigger. Omit to always apply the action. Currently only a single condition may be specified. Otherwise an INVALID_ARGUMENT error is thrown.
+    { # Defines circumstances to be checked before allowing a behavior.
+      "activeTimeRange": [ # Range of time(s) specifying when the condition is active. Maximum of 10 time ranges.
+        { # Used for time-dependent conditions.
+          "endTime": "A String", # End of time range. Range is inclusive. Must be in the future.
+          "startTime": "A String", # Start of time range. Range is inclusive.
+        },
+      ],
+      "queryTerms": [ # Search only. A list of terms to match the query on. Maximum of 10 query terms.
+        { # Matcher for a search request query.
+          "fullMatch": True or False, # Whether the search query needs to exactly match the query term.
+          "value": "A String", # The specific query value to match against. Must be lowercase and UTF-8 encoded. Can have at most 3 space-separated terms if full_match is true. Cannot be an empty string. Maximum length of 5000 characters.
+        },
+      ],
+    },
+  ],
+  "displayName": "A String", # Required. Human-readable name. The identifier used in UI views. Must be a UTF-8 encoded string. Length limit is 128 characters. Otherwise an INVALID_ARGUMENT error is thrown.
+  "filterAction": { # Specifies which products may be included in results. Uses the same filter as boost. # Defines a filter-type control. Currently not supported by Recommendation.
+    "dataStore": "A String", # Required. Specifies which data store's documents can be filtered by this control. Full data store name, e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store
+    "filter": "A String", # Required. A filter to apply on the matching condition results. Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID_ARGUMENT error is thrown.
+  },
+  "name": "A String", # Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`.
+  "redirectAction": { # Redirects a shopper to the provided URI. # Defines a redirect-type control.
+    "redirectUri": "A String", # Required. The URI to which the shopper will be redirected. Must be 2000 characters or fewer. Otherwise an INVALID_ARGUMENT error is thrown.
+  },
+  "solutionType": "A String", # Required. Immutable. What solution the control belongs to. Must be compatible with the vertical of the resource. Otherwise an INVALID_ARGUMENT error is thrown.
+  "synonymsAction": { # Creates a set of terms that will act as synonyms of one another. Example: "happy" will also be considered as "glad", and "glad" will also be considered as "happy". # Treats a group of terms as synonyms of one another.
+    "synonyms": [ # Defines a set of synonyms. Can specify up to 100 synonyms. Must specify at least 2 synonyms. Otherwise an INVALID_ARGUMENT error is thrown.
+      "A String",
+    ],
+  },
+  "useCases": [ # Specifies the use case for the control. Affects what condition fields can be set. Only applies to SOLUTION_TYPE_SEARCH. Currently only one use case per control is allowed. Must be set when solution_type is SolutionType.SOLUTION_TYPE_SEARCH.
+    "A String",
+  ],
+}
+
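A sketch of a partial update with a placeholder resource name; only displayName is touched, since name and solution_type are not updatable and the action type cannot change:

from googleapiclient.discovery import build

service = build("discoveryengine", "v1beta")  # assumes Application Default Credentials

name = ("projects/example-project/locations/global/collections/default_collection/"
        "dataStores/example-data-store/controls/boost-pricing")  # hypothetical

updated = (
    service.projects().locations().dataStores().controls()
    .patch(name=name,
           body={"displayName": "Boost pricing docs (v2)"},
           updateMask="displayName")
    .execute()
)
print(updated["displayName"])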
+
\ No newline at end of file
diff --git a/docs/dyn/discoveryengine_v1beta.projects.locations.dataStores.html b/docs/dyn/discoveryengine_v1beta.projects.locations.dataStores.html
index d4daaa4e433..46ad7b62416 100644
--- a/docs/dyn/discoveryengine_v1beta.projects.locations.dataStores.html
+++ b/docs/dyn/discoveryengine_v1beta.projects.locations.dataStores.html
@@ -79,6 +79,11 @@

Instance Methods

Returns the branches Resource.

+

+ controls()

+

Returns the controls Resource.

+

conversations()

@@ -391,7 +396,7 @@

Method Details

Args:
  parent: string, Required. The parent branch resource name, such as `projects/{project}/locations/{location}/collections/{collection_id}`. If the caller does not have permission to list DataStores under this location, regardless of whether or not this data store exists, a PERMISSION_DENIED error is returned. (required)
-  filter: string, Filter by solution type . For example: filter = 'solution_type:SOLUTION_TYPE_SEARCH'
+  filter: string, Filter by solution type. For example: `filter = 'solution_type:SOLUTION_TYPE_SEARCH'`
  pageSize: integer, Maximum number of DataStores to return. If unspecified, defaults to 10. The maximum allowed value is 50. Values above 50 will be coerced to 50. If this field is negative, an INVALID_ARGUMENT is returned.
  pageToken: string, A page token ListDataStoresResponse.next_page_token, received from a previous DataStoreService.ListDataStores call. Provide this to retrieve the subsequent page. When paginating, all other parameters provided to DataStoreService.ListDataStores must match the call that provided the page token. Otherwise, an INVALID_ARGUMENT error is returned.
  x__xgafv: string, V1 error format.
diff --git a/docs/dyn/discoveryengine_v1beta.projects.locations.dataStores.servingConfigs.html b/docs/dyn/discoveryengine_v1beta.projects.locations.dataStores.servingConfigs.html
index e338eaa5fe6..b8d7f5de439 100644
--- a/docs/dyn/discoveryengine_v1beta.projects.locations.dataStores.servingConfigs.html
+++ b/docs/dyn/discoveryengine_v1beta.projects.locations.dataStores.servingConfigs.html
@@ -115,6 +115,7 @@

Method Details

"answerGenerationSpec": { # Answer generation specification. # Answer generation specification. "answerLanguageCode": "A String", # Language code for Answer. Use language tags defined by [BCP47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt). Note: This is an experimental feature. "ignoreAdversarialQuery": True or False, # Specifies whether to filter out adversarial queries. The default value is `false`. Google employs search-query classification to detect adversarial queries. No answer is returned if the search query is classified as an adversarial query. For example, a user might ask a question regarding negative comments about the company or submit a query designed to generate unsafe, policy-violating output. If this field is set to `true`, we skip generating answers for adversarial queries and return fallback messages instead. + "ignoreLowRelevantContent": True or False, # Specifies whether to filter out queries that have low relevance. If this field is set to `false`, all search results are used regardless of relevance to generate answers. If set to `true` or unset, the behavior will be determined automatically by the service. "ignoreNonAnswerSeekingQuery": True or False, # Specifies whether to filter out queries that are not answer-seeking. The default value is `false`. Google employs search-query classification to detect answer-seeking queries. No answer is returned if the search query is classified as a non-answer seeking query. If this field is set to `true`, we skip generating answers for non-answer seeking queries and return fallback messages instead. "includeCitations": True or False, # Specifies whether to include citation metadata in the answer. The default value is `false`. "modelSpec": { # Answer Generation Model specification. # Answer generation model specification. @@ -167,6 +168,11 @@

Method Details

}, ], }, + "dataStoreSpecs": [ # Specs defining dataStores to filter on in a search call and configurations for those dataStores. This is only considered for engines with multiple dataStores use case. For single dataStore within an engine, they should use the specs at the top level. + { # A struct to define data stores to filter on in a search call and configurations for those data stores. A maximum of 1 DataStoreSpec per data_store is allowed. Otherwise, an `INVALID_ARGUMENT` error is returned. + "dataStore": "A String", # Required. Full resource name of DataStore, such as `projects/{project}/locations/{location}/collections/{collection_id}/dataStores/{data_store_id}`. + }, + ], "filter": "A String", # The filter syntax consists of an expression language for constructing a predicate from one or more fields of the documents being filtered. Filter expression is case-sensitive. This will be used to filter search results which may affect the Answer response. If this field is unrecognizable, an `INVALID_ARGUMENT` is returned. Filtering in Vertex AI Search is done by mapping the LHS filter key to a key property defined in the Vertex AI Search backend -- this mapping is defined by the customer in their schema. For example a media customers might have a field 'name' in their schema. In this case the filter would look like this: filter --> name:'ANY("king kong")' For more information about filtering including syntax and filter operators, see [Filter](https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata) "maxReturnResults": 42, # Number of search results to return. The default value is 10. "orderBy": "A String", # The order in which documents are returned. Documents can be ordered by a field in an Document object. Leave it unset if ordered by relevance. `order_by` expression is case-sensitive. For more information on ordering, see [Ordering](https://cloud.google.com/retail/docs/filter-and-order#order) If this field is unrecognizable, an `INVALID_ARGUMENT` is returned. @@ -253,6 +259,9 @@

Method Details

"documentMetadata": { # Document metadata. # Document metadata. "document": "A String", # Document resource name. "pageIdentifier": "A String", # Page identifier. + "structData": { # The structured JSON metadata for the document. It is populated from the struct data from the Chunk in search result. + "a_key": "", # Properties of the object. + }, "title": "A String", # Title. "uri": "A String", # URI for the document. }, @@ -266,6 +275,9 @@

Method Details

}, ], "document": "A String", # Document resource name. + "structData": { # The structured JSON metadata for the document. It is populated from the struct data from the Chunk in search result. + "a_key": "", # Properties of the object. + }, "title": "A String", # Title. "uri": "A String", # URI for the document. }, @@ -734,6 +746,7 @@

Method Details

"selectedPosition": 42, # End user selected CompleteQueryResponse.QuerySuggestion.suggestion position, starting from 0. "selectedSuggestion": "A String", # End user selected CompleteQueryResponse.QuerySuggestion.suggestion. }, + "dataStore": "A String", # The DataStore resource full name, of the form `projects/{project}/locations/{location}/collections/{collection_id}/dataStores/{data_store_id}`. Optional. Only required for user events whose data store can't by determined by UserEvent.engine or UserEvent.documents. If data store is set in the parent of write/import/collect user event requests, this field can be omitted. "directUserRequest": True or False, # Should set to true if the request is made directly from the end user, in which case the UserEvent.user_info.user_agent can be populated from the HTTP request. This flag should be set only if the API request is made directly from the end user such as a mobile app (and not if a gateway or a server is processing and pushing the user events). This should not be set when using the JavaScript tag in UserEventService.CollectUserEvent. "documents": [ # List of Documents associated with this user event. This field is optional except for the following event types: * `view-item` * `add-to-cart` * `purchase` * `media-play` * `media-complete` In a `search` event, this field represents the documents returned to the end user on the current page (the end user may have not finished browsing the whole page yet). When a new page is returned to the end user, after pagination/filtering/ordering even for the same query, a new `search` event with different UserEvent.documents is desired. { # Detailed document information associated with a user event. @@ -746,6 +759,7 @@

Method Details

"uri": "A String", # The Document URI - only allowed for website data stores. }, ], + "engine": "A String", # The Engine resource name, in the form of `projects/{project}/locations/{location}/collections/{collection_id}/engines/{engine_id}`. Optional. Only required for Engine produced user events. For example, user events from blended search. "eventTime": "A String", # Only required for UserEventService.ImportUserEvents method. Timestamp of when the user event happened. "eventType": "A String", # Required. User event type. Allowed values are: Generic values: * `search`: Search for Documents. * `view-item`: Detailed page view of a Document. * `view-item-list`: View of a panel or ordered list of Documents. * `view-home-page`: View of the home page. * `view-category-page`: View of a category page, e.g. Home > Men > Jeans Retail-related values: * `add-to-cart`: Add an item(s) to cart, e.g. in Retail online shopping * `purchase`: Purchase an item(s) Media-related values: * `media-play`: Start/resume watching a video, playing a song, etc. * `media-complete`: Finished or stopped midway through a video, song, etc. "filter": "A String", # The filter syntax consists of an expression language for constructing a predicate from one or more fields of the documents being filtered. One example is for `search` events, the associated SearchRequest may contain a filter expression in SearchRequest.filter conforming to https://google.aip.dev/160#filtering. Similarly, for `view-item-list` events that are generated from a RecommendRequest, this field may be populated directly from RecommendRequest.filter conforming to https://google.aip.dev/160#filtering. The value must be a UTF-8 encoded string with a length limit of 1,000 characters. Otherwise, an `INVALID_ARGUMENT` error is returned. diff --git a/docs/dyn/discoveryengine_v1beta.projects.locations.dataStores.sessions.answers.html b/docs/dyn/discoveryengine_v1beta.projects.locations.dataStores.sessions.answers.html index afddceba14b..7109695eeab 100644 --- a/docs/dyn/discoveryengine_v1beta.projects.locations.dataStores.sessions.answers.html +++ b/docs/dyn/discoveryengine_v1beta.projects.locations.dataStores.sessions.answers.html @@ -135,6 +135,9 @@

Method Details

"documentMetadata": { # Document metadata. # Document metadata. "document": "A String", # Document resource name. "pageIdentifier": "A String", # Page identifier. + "structData": { # The structured JSON metadata for the document. It is populated from the struct data from the Chunk in search result. + "a_key": "", # Properties of the object. + }, "title": "A String", # Title. "uri": "A String", # URI for the document. }, @@ -148,6 +151,9 @@

Method Details

}, ], "document": "A String", # Document resource name. + "structData": { # The structured JSON metadata for the document. It is populated from the struct data from the Chunk in search result. + "a_key": "", # Properties of the object. + }, "title": "A String", # Title. "uri": "A String", # URI for the document. }, diff --git a/docs/dyn/discoveryengine_v1beta.projects.locations.dataStores.userEvents.html b/docs/dyn/discoveryengine_v1beta.projects.locations.dataStores.userEvents.html index eb5c27005c6..487937d2a62 100644 --- a/docs/dyn/discoveryengine_v1beta.projects.locations.dataStores.userEvents.html +++ b/docs/dyn/discoveryengine_v1beta.projects.locations.dataStores.userEvents.html @@ -84,7 +84,7 @@

Instance Methods

import_(parent, body=None, x__xgafv=None)

Bulk import of user events. Request processing might be synchronous. Events that already exist are skipped. Use this method for backfilling historical user events. Operation.response is of type ImportResponse. Note that it is possible for a subset of the items to be successfully inserted. Operation.metadata is of type ImportMetadata.

- write(parent, body=None, x__xgafv=None)

+ write(parent, body=None, writeAsync=None, x__xgafv=None)

Writes a single user event.

Method Details

@@ -169,6 +169,7 @@

Method Details

"selectedPosition": 42, # End user selected CompleteQueryResponse.QuerySuggestion.suggestion position, starting from 0. "selectedSuggestion": "A String", # End user selected CompleteQueryResponse.QuerySuggestion.suggestion. }, + "dataStore": "A String", # The DataStore resource full name, of the form `projects/{project}/locations/{location}/collections/{collection_id}/dataStores/{data_store_id}`. Optional. Only required for user events whose data store can't by determined by UserEvent.engine or UserEvent.documents. If data store is set in the parent of write/import/collect user event requests, this field can be omitted. "directUserRequest": True or False, # Should set to true if the request is made directly from the end user, in which case the UserEvent.user_info.user_agent can be populated from the HTTP request. This flag should be set only if the API request is made directly from the end user such as a mobile app (and not if a gateway or a server is processing and pushing the user events). This should not be set when using the JavaScript tag in UserEventService.CollectUserEvent. "documents": [ # List of Documents associated with this user event. This field is optional except for the following event types: * `view-item` * `add-to-cart` * `purchase` * `media-play` * `media-complete` In a `search` event, this field represents the documents returned to the end user on the current page (the end user may have not finished browsing the whole page yet). When a new page is returned to the end user, after pagination/filtering/ordering even for the same query, a new `search` event with different UserEvent.documents is desired. { # Detailed document information associated with a user event. @@ -181,6 +182,7 @@

Method Details

"uri": "A String", # The Document URI - only allowed for website data stores. }, ], + "engine": "A String", # The Engine resource name, in the form of `projects/{project}/locations/{location}/collections/{collection_id}/engines/{engine_id}`. Optional. Only required for Engine produced user events. For example, user events from blended search. "eventTime": "A String", # Only required for UserEventService.ImportUserEvents method. Timestamp of when the user event happened. "eventType": "A String", # Required. User event type. Allowed values are: Generic values: * `search`: Search for Documents. * `view-item`: Detailed page view of a Document. * `view-item-list`: View of a panel or ordered list of Documents. * `view-home-page`: View of the home page. * `view-category-page`: View of a category page, e.g. Home > Men > Jeans Retail-related values: * `add-to-cart`: Add an item(s) to cart, e.g. in Retail online shopping * `purchase`: Purchase an item(s) Media-related values: * `media-play`: Start/resume watching a video, playing a song, etc. * `media-complete`: Finished or stopped midway through a video, song, etc. "filter": "A String", # The filter syntax consists of an expression language for constructing a predicate from one or more fields of the documents being filtered. One example is for `search` events, the associated SearchRequest may contain a filter expression in SearchRequest.filter conforming to https://google.aip.dev/160#filtering. Similarly, for `view-item-list` events that are generated from a RecommendRequest, this field may be populated directly from RecommendRequest.filter conforming to https://google.aip.dev/160#filtering. The value must be a UTF-8 encoded string with a length limit of 1,000 characters. Otherwise, an `INVALID_ARGUMENT` error is returned. @@ -260,7 +262,7 @@

Method Details

- write(parent, body=None, x__xgafv=None)
+ write(parent, body=None, writeAsync=None, x__xgafv=None)
Writes a single user event.
 
 Args:
@@ -284,6 +286,7 @@ 

Method Details

"selectedPosition": 42, # End user selected CompleteQueryResponse.QuerySuggestion.suggestion position, starting from 0. "selectedSuggestion": "A String", # End user selected CompleteQueryResponse.QuerySuggestion.suggestion. }, + "dataStore": "A String", # The DataStore resource full name, of the form `projects/{project}/locations/{location}/collections/{collection_id}/dataStores/{data_store_id}`. Optional. Only required for user events whose data store can't by determined by UserEvent.engine or UserEvent.documents. If data store is set in the parent of write/import/collect user event requests, this field can be omitted. "directUserRequest": True or False, # Should set to true if the request is made directly from the end user, in which case the UserEvent.user_info.user_agent can be populated from the HTTP request. This flag should be set only if the API request is made directly from the end user such as a mobile app (and not if a gateway or a server is processing and pushing the user events). This should not be set when using the JavaScript tag in UserEventService.CollectUserEvent. "documents": [ # List of Documents associated with this user event. This field is optional except for the following event types: * `view-item` * `add-to-cart` * `purchase` * `media-play` * `media-complete` In a `search` event, this field represents the documents returned to the end user on the current page (the end user may have not finished browsing the whole page yet). When a new page is returned to the end user, after pagination/filtering/ordering even for the same query, a new `search` event with different UserEvent.documents is desired. { # Detailed document information associated with a user event. @@ -296,6 +299,7 @@

Method Details

"uri": "A String", # The Document URI - only allowed for website data stores. }, ], + "engine": "A String", # The Engine resource name, in the form of `projects/{project}/locations/{location}/collections/{collection_id}/engines/{engine_id}`. Optional. Only required for Engine produced user events. For example, user events from blended search. "eventTime": "A String", # Only required for UserEventService.ImportUserEvents method. Timestamp of when the user event happened. "eventType": "A String", # Required. User event type. Allowed values are: Generic values: * `search`: Search for Documents. * `view-item`: Detailed page view of a Document. * `view-item-list`: View of a panel or ordered list of Documents. * `view-home-page`: View of the home page. * `view-category-page`: View of a category page, e.g. Home > Men > Jeans Retail-related values: * `add-to-cart`: Add an item(s) to cart, e.g. in Retail online shopping * `purchase`: Purchase an item(s) Media-related values: * `media-play`: Start/resume watching a video, playing a song, etc. * `media-complete`: Finished or stopped midway through a video, song, etc. "filter": "A String", # The filter syntax consists of an expression language for constructing a predicate from one or more fields of the documents being filtered. One example is for `search` events, the associated SearchRequest may contain a filter expression in SearchRequest.filter conforming to https://google.aip.dev/160#filtering. Similarly, for `view-item-list` events that are generated from a RecommendRequest, this field may be populated directly from RecommendRequest.filter conforming to https://google.aip.dev/160#filtering. The value must be a UTF-8 encoded string with a length limit of 1,000 characters. Otherwise, an `INVALID_ARGUMENT` error is returned. @@ -342,6 +346,7 @@

Method Details

"userPseudoId": "A String", # Required. A unique identifier for tracking visitors. For example, this could be implemented with an HTTP cookie, which should be able to uniquely identify a visitor on a single device. This unique identifier should not change if the visitor log in/out of the website. Do not set the field to the same fixed ID for different users. This mixes the event history of those users together, which results in degraded model quality. The field must be a UTF-8 encoded string with a length limit of 128 characters. Otherwise, an `INVALID_ARGUMENT` error is returned. The field should not contain PII or user-data. We recommend to use Google Analytics [Client ID](https://developers.google.com/analytics/devguides/collection/analyticsjs/field-reference#clientId) for this field. } + writeAsync: boolean, If set to true, the user event is written asynchronously after validation, and the API responds without waiting for the write. x__xgafv: string, V1 error format. Allowed values 1 - v1 error format @@ -366,6 +371,7 @@

Method Details

"selectedPosition": 42, # End user selected CompleteQueryResponse.QuerySuggestion.suggestion position, starting from 0. "selectedSuggestion": "A String", # End user selected CompleteQueryResponse.QuerySuggestion.suggestion. }, + "dataStore": "A String", # The DataStore resource full name, of the form `projects/{project}/locations/{location}/collections/{collection_id}/dataStores/{data_store_id}`. Optional. Only required for user events whose data store can't by determined by UserEvent.engine or UserEvent.documents. If data store is set in the parent of write/import/collect user event requests, this field can be omitted. "directUserRequest": True or False, # Should set to true if the request is made directly from the end user, in which case the UserEvent.user_info.user_agent can be populated from the HTTP request. This flag should be set only if the API request is made directly from the end user such as a mobile app (and not if a gateway or a server is processing and pushing the user events). This should not be set when using the JavaScript tag in UserEventService.CollectUserEvent. "documents": [ # List of Documents associated with this user event. This field is optional except for the following event types: * `view-item` * `add-to-cart` * `purchase` * `media-play` * `media-complete` In a `search` event, this field represents the documents returned to the end user on the current page (the end user may have not finished browsing the whole page yet). When a new page is returned to the end user, after pagination/filtering/ordering even for the same query, a new `search` event with different UserEvent.documents is desired. { # Detailed document information associated with a user event. @@ -378,6 +384,7 @@

Method Details

"uri": "A String", # The Document URI - only allowed for website data stores. }, ], + "engine": "A String", # The Engine resource name, in the form of `projects/{project}/locations/{location}/collections/{collection_id}/engines/{engine_id}`. Optional. Only required for Engine produced user events. For example, user events from blended search. "eventTime": "A String", # Only required for UserEventService.ImportUserEvents method. Timestamp of when the user event happened. "eventType": "A String", # Required. User event type. Allowed values are: Generic values: * `search`: Search for Documents. * `view-item`: Detailed page view of a Document. * `view-item-list`: View of a panel or ordered list of Documents. * `view-home-page`: View of the home page. * `view-category-page`: View of a category page, e.g. Home > Men > Jeans Retail-related values: * `add-to-cart`: Add an item(s) to cart, e.g. in Retail online shopping * `purchase`: Purchase an item(s) Media-related values: * `media-play`: Start/resume watching a video, playing a song, etc. * `media-complete`: Finished or stopped midway through a video, song, etc. "filter": "A String", # The filter syntax consists of an expression language for constructing a predicate from one or more fields of the documents being filtered. One example is for `search` events, the associated SearchRequest may contain a filter expression in SearchRequest.filter conforming to https://google.aip.dev/160#filtering. Similarly, for `view-item-list` events that are generated from a RecommendRequest, this field may be populated directly from RecommendRequest.filter conforming to https://google.aip.dev/160#filtering. The value must be a UTF-8 encoded string with a length limit of 1,000 characters. Otherwise, an `INVALID_ARGUMENT` error is returned. diff --git a/docs/dyn/discoveryengine_v1beta.projects.locations.rankingConfigs.html b/docs/dyn/discoveryengine_v1beta.projects.locations.rankingConfigs.html index 5a00e5ae088..e2a966e2e32 100644 --- a/docs/dyn/discoveryengine_v1beta.projects.locations.rankingConfigs.html +++ b/docs/dyn/discoveryengine_v1beta.projects.locations.rankingConfigs.html @@ -108,6 +108,9 @@

Method Details

}, ], "topN": 42, # The number of results to return. If this is unset or no bigger than zero, returns all results. + "userLabels": { # The user labels applied to a resource must meet the following requirements: * Each resource can have multiple labels, up to a maximum of 64. * Each label must be a key-value pair. * Keys have a minimum length of 1 character and a maximum length of 63 characters and cannot be empty. Values can be empty and have a maximum length of 63 characters. * Keys and values can contain only lowercase letters, numeric characters, underscores, and dashes. All characters must use UTF-8 encoding, and international characters are allowed. * The key portion of a label must be unique. However, you can use the same key with multiple resources. * Keys must start with a lowercase letter or international character. See [Google Cloud Document](https://cloud.google.com/resource-manager/docs/creating-managing-labels#requirements) for more details. + "a_key": "A String", + }, } x__xgafv: string, V1 error format. diff --git a/docs/dyn/discoveryengine_v1beta.projects.locations.userEvents.html b/docs/dyn/discoveryengine_v1beta.projects.locations.userEvents.html index 80b54f747ff..67ae804cb1e 100644 --- a/docs/dyn/discoveryengine_v1beta.projects.locations.userEvents.html +++ b/docs/dyn/discoveryengine_v1beta.projects.locations.userEvents.html @@ -78,7 +78,7 @@

Instance Methods

close()

Close httplib2 connections.

- write(parent, body=None, x__xgafv=None)

+ write(parent, body=None, writeAsync=None, x__xgafv=None)

Writes a single user event.

Method Details

@@ -87,7 +87,7 @@

Method Details

- write(parent, body=None, x__xgafv=None)
+ write(parent, body=None, writeAsync=None, x__xgafv=None)
Writes a single user event.
 
 Args:
@@ -111,6 +111,7 @@ 

Method Details

"selectedPosition": 42, # End user selected CompleteQueryResponse.QuerySuggestion.suggestion position, starting from 0. "selectedSuggestion": "A String", # End user selected CompleteQueryResponse.QuerySuggestion.suggestion. }, + "dataStore": "A String", # The DataStore resource full name, of the form `projects/{project}/locations/{location}/collections/{collection_id}/dataStores/{data_store_id}`. Optional. Only required for user events whose data store can't by determined by UserEvent.engine or UserEvent.documents. If data store is set in the parent of write/import/collect user event requests, this field can be omitted. "directUserRequest": True or False, # Should set to true if the request is made directly from the end user, in which case the UserEvent.user_info.user_agent can be populated from the HTTP request. This flag should be set only if the API request is made directly from the end user such as a mobile app (and not if a gateway or a server is processing and pushing the user events). This should not be set when using the JavaScript tag in UserEventService.CollectUserEvent. "documents": [ # List of Documents associated with this user event. This field is optional except for the following event types: * `view-item` * `add-to-cart` * `purchase` * `media-play` * `media-complete` In a `search` event, this field represents the documents returned to the end user on the current page (the end user may have not finished browsing the whole page yet). When a new page is returned to the end user, after pagination/filtering/ordering even for the same query, a new `search` event with different UserEvent.documents is desired. { # Detailed document information associated with a user event. @@ -123,6 +124,7 @@

Method Details

"uri": "A String", # The Document URI - only allowed for website data stores. }, ], + "engine": "A String", # The Engine resource name, in the form of `projects/{project}/locations/{location}/collections/{collection_id}/engines/{engine_id}`. Optional. Only required for Engine produced user events. For example, user events from blended search. "eventTime": "A String", # Only required for UserEventService.ImportUserEvents method. Timestamp of when the user event happened. "eventType": "A String", # Required. User event type. Allowed values are: Generic values: * `search`: Search for Documents. * `view-item`: Detailed page view of a Document. * `view-item-list`: View of a panel or ordered list of Documents. * `view-home-page`: View of the home page. * `view-category-page`: View of a category page, e.g. Home > Men > Jeans Retail-related values: * `add-to-cart`: Add an item(s) to cart, e.g. in Retail online shopping * `purchase`: Purchase an item(s) Media-related values: * `media-play`: Start/resume watching a video, playing a song, etc. * `media-complete`: Finished or stopped midway through a video, song, etc. "filter": "A String", # The filter syntax consists of an expression language for constructing a predicate from one or more fields of the documents being filtered. One example is for `search` events, the associated SearchRequest may contain a filter expression in SearchRequest.filter conforming to https://google.aip.dev/160#filtering. Similarly, for `view-item-list` events that are generated from a RecommendRequest, this field may be populated directly from RecommendRequest.filter conforming to https://google.aip.dev/160#filtering. The value must be a UTF-8 encoded string with a length limit of 1,000 characters. Otherwise, an `INVALID_ARGUMENT` error is returned. @@ -169,6 +171,7 @@

Method Details

"userPseudoId": "A String", # Required. A unique identifier for tracking visitors. For example, this could be implemented with an HTTP cookie, which should be able to uniquely identify a visitor on a single device. This unique identifier should not change if the visitor log in/out of the website. Do not set the field to the same fixed ID for different users. This mixes the event history of those users together, which results in degraded model quality. The field must be a UTF-8 encoded string with a length limit of 128 characters. Otherwise, an `INVALID_ARGUMENT` error is returned. The field should not contain PII or user-data. We recommend to use Google Analytics [Client ID](https://developers.google.com/analytics/devguides/collection/analyticsjs/field-reference#clientId) for this field. } + writeAsync: boolean, If set to true, the user event is written asynchronously after validation, and the API responds without waiting for the write. x__xgafv: string, V1 error format. Allowed values 1 - v1 error format @@ -193,6 +196,7 @@

Method Details

"selectedPosition": 42, # End user selected CompleteQueryResponse.QuerySuggestion.suggestion position, starting from 0. "selectedSuggestion": "A String", # End user selected CompleteQueryResponse.QuerySuggestion.suggestion. }, + "dataStore": "A String", # The DataStore resource full name, of the form `projects/{project}/locations/{location}/collections/{collection_id}/dataStores/{data_store_id}`. Optional. Only required for user events whose data store can't by determined by UserEvent.engine or UserEvent.documents. If data store is set in the parent of write/import/collect user event requests, this field can be omitted. "directUserRequest": True or False, # Should set to true if the request is made directly from the end user, in which case the UserEvent.user_info.user_agent can be populated from the HTTP request. This flag should be set only if the API request is made directly from the end user such as a mobile app (and not if a gateway or a server is processing and pushing the user events). This should not be set when using the JavaScript tag in UserEventService.CollectUserEvent. "documents": [ # List of Documents associated with this user event. This field is optional except for the following event types: * `view-item` * `add-to-cart` * `purchase` * `media-play` * `media-complete` In a `search` event, this field represents the documents returned to the end user on the current page (the end user may have not finished browsing the whole page yet). When a new page is returned to the end user, after pagination/filtering/ordering even for the same query, a new `search` event with different UserEvent.documents is desired. { # Detailed document information associated with a user event. @@ -205,6 +209,7 @@

Method Details

"uri": "A String", # The Document URI - only allowed for website data stores. }, ], + "engine": "A String", # The Engine resource name, in the form of `projects/{project}/locations/{location}/collections/{collection_id}/engines/{engine_id}`. Optional. Only required for Engine produced user events. For example, user events from blended search. "eventTime": "A String", # Only required for UserEventService.ImportUserEvents method. Timestamp of when the user event happened. "eventType": "A String", # Required. User event type. Allowed values are: Generic values: * `search`: Search for Documents. * `view-item`: Detailed page view of a Document. * `view-item-list`: View of a panel or ordered list of Documents. * `view-home-page`: View of the home page. * `view-category-page`: View of a category page, e.g. Home > Men > Jeans Retail-related values: * `add-to-cart`: Add an item(s) to cart, e.g. in Retail online shopping * `purchase`: Purchase an item(s) Media-related values: * `media-play`: Start/resume watching a video, playing a song, etc. * `media-complete`: Finished or stopped midway through a video, song, etc. "filter": "A String", # The filter syntax consists of an expression language for constructing a predicate from one or more fields of the documents being filtered. One example is for `search` events, the associated SearchRequest may contain a filter expression in SearchRequest.filter conforming to https://google.aip.dev/160#filtering. Similarly, for `view-item-list` events that are generated from a RecommendRequest, this field may be populated directly from RecommendRequest.filter conforming to https://google.aip.dev/160#filtering. The value must be a UTF-8 encoded string with a length limit of 1,000 characters. Otherwise, an `INVALID_ARGUMENT` error is returned. diff --git a/docs/dyn/displayvideo_v2.advertisers.creatives.html b/docs/dyn/displayvideo_v2.advertisers.creatives.html index df79ae18ffb..3ecf31487b2 100644 --- a/docs/dyn/displayvideo_v2.advertisers.creatives.html +++ b/docs/dyn/displayvideo_v2.advertisers.creatives.html @@ -209,7 +209,7 @@

Method Details

"status": "A String", # Status of the exchange review. }, ], - "publisherReviewStatuses": [ # Publisher review statuses for the creative. + "publisherReviewStatuses": [ # Publisher review statuses for the creative. **Warning:** This field will be deprecated on June 26th, 2024. After this date, this field will be empty. Read our [feature deprecation announcement](/display-video/api/deprecations#features.creative_publisher_review_statuses) for more information. { # Publisher review status for the creative. "publisherName": "A String", # The publisher reviewing the creative. "status": "A String", # Status of the publisher review. @@ -369,7 +369,7 @@

Method Details

"status": "A String", # Status of the exchange review. }, ], - "publisherReviewStatuses": [ # Publisher review statuses for the creative. + "publisherReviewStatuses": [ # Publisher review statuses for the creative. **Warning:** This field will be deprecated on June 26th, 2024. After this date, this field will be empty. Read our [feature deprecation announcement](/display-video/api/deprecations#features.creative_publisher_review_statuses) for more information. { # Publisher review status for the creative. "publisherName": "A String", # The publisher reviewing the creative. "status": "A String", # Status of the publisher review. @@ -556,7 +556,7 @@

Method Details

"status": "A String", # Status of the exchange review. }, ], - "publisherReviewStatuses": [ # Publisher review statuses for the creative. + "publisherReviewStatuses": [ # Publisher review statuses for the creative. **Warning:** This field will be deprecated on June 26th, 2024. After this date, this field will be empty. Read our [feature deprecation announcement](/display-video/api/deprecations#features.creative_publisher_review_statuses) for more information. { # Publisher review status for the creative. "publisherName": "A String", # The publisher reviewing the creative. "status": "A String", # Status of the publisher review. @@ -729,7 +729,7 @@

Method Details

"status": "A String", # Status of the exchange review. }, ], - "publisherReviewStatuses": [ # Publisher review statuses for the creative. + "publisherReviewStatuses": [ # Publisher review statuses for the creative. **Warning:** This field will be deprecated on June 26th, 2024. After this date, this field will be empty. Read our [feature deprecation announcement](/display-video/api/deprecations#features.creative_publisher_review_statuses) for more information. { # Publisher review status for the creative. "publisherName": "A String", # The publisher reviewing the creative. "status": "A String", # Status of the publisher review. @@ -909,7 +909,7 @@

Method Details

"status": "A String", # Status of the exchange review. }, ], - "publisherReviewStatuses": [ # Publisher review statuses for the creative. + "publisherReviewStatuses": [ # Publisher review statuses for the creative. **Warning:** This field will be deprecated on June 26th, 2024. After this date, this field will be empty. Read our [feature deprecation announcement](/display-video/api/deprecations#features.creative_publisher_review_statuses) for more information. { # Publisher review status for the creative. "publisherName": "A String", # The publisher reviewing the creative. "status": "A String", # Status of the publisher review. @@ -1070,7 +1070,7 @@

Method Details

"status": "A String", # Status of the exchange review. }, ], - "publisherReviewStatuses": [ # Publisher review statuses for the creative. + "publisherReviewStatuses": [ # Publisher review statuses for the creative. **Warning:** This field will be deprecated on June 26th, 2024. After this date, this field will be empty. Read our [feature deprecation announcement](/display-video/api/deprecations#features.creative_publisher_review_statuses) for more information. { # Publisher review status for the creative. "publisherName": "A String", # The publisher reviewing the creative. "status": "A String", # Status of the publisher review. diff --git a/docs/dyn/displayvideo_v3.advertisers.creatives.html b/docs/dyn/displayvideo_v3.advertisers.creatives.html index 088c918405f..65bfc0cf55c 100644 --- a/docs/dyn/displayvideo_v3.advertisers.creatives.html +++ b/docs/dyn/displayvideo_v3.advertisers.creatives.html @@ -209,7 +209,7 @@

Method Details

"status": "A String", # Status of the exchange review. }, ], - "publisherReviewStatuses": [ # Publisher review statuses for the creative. + "publisherReviewStatuses": [ # Publisher review statuses for the creative. **Warning:** This field will be deprecated on June 26th, 2024. After this date, this field will be empty. Read our [feature deprecation announcement](/display-video/api/deprecations#features.creative_publisher_review_statuses) for more information. { # Publisher review status for the creative. "publisherName": "A String", # The publisher reviewing the creative. "status": "A String", # Status of the publisher review. @@ -369,7 +369,7 @@

Method Details

"status": "A String", # Status of the exchange review. }, ], - "publisherReviewStatuses": [ # Publisher review statuses for the creative. + "publisherReviewStatuses": [ # Publisher review statuses for the creative. **Warning:** This field will be deprecated on June 26th, 2024. After this date, this field will be empty. Read our [feature deprecation announcement](/display-video/api/deprecations#features.creative_publisher_review_statuses) for more information. { # Publisher review status for the creative. "publisherName": "A String", # The publisher reviewing the creative. "status": "A String", # Status of the publisher review. @@ -556,7 +556,7 @@

Method Details

"status": "A String", # Status of the exchange review. }, ], - "publisherReviewStatuses": [ # Publisher review statuses for the creative. + "publisherReviewStatuses": [ # Publisher review statuses for the creative. **Warning:** This field will be deprecated on June 26th, 2024. After this date, this field will be empty. Read our [feature deprecation announcement](/display-video/api/deprecations#features.creative_publisher_review_statuses) for more information. { # Publisher review status for the creative. "publisherName": "A String", # The publisher reviewing the creative. "status": "A String", # Status of the publisher review. @@ -729,7 +729,7 @@

Method Details

"status": "A String", # Status of the exchange review. }, ], - "publisherReviewStatuses": [ # Publisher review statuses for the creative. + "publisherReviewStatuses": [ # Publisher review statuses for the creative. **Warning:** This field will be deprecated on June 26th, 2024. After this date, this field will be empty. Read our [feature deprecation announcement](/display-video/api/deprecations#features.creative_publisher_review_statuses) for more information. { # Publisher review status for the creative. "publisherName": "A String", # The publisher reviewing the creative. "status": "A String", # Status of the publisher review. @@ -909,7 +909,7 @@

Method Details

"status": "A String", # Status of the exchange review. }, ], - "publisherReviewStatuses": [ # Publisher review statuses for the creative. + "publisherReviewStatuses": [ # Publisher review statuses for the creative. **Warning:** This field will be deprecated on June 26th, 2024. After this date, this field will be empty. Read our [feature deprecation announcement](/display-video/api/deprecations#features.creative_publisher_review_statuses) for more information. { # Publisher review status for the creative. "publisherName": "A String", # The publisher reviewing the creative. "status": "A String", # Status of the publisher review. @@ -1070,7 +1070,7 @@

Method Details

"status": "A String", # Status of the exchange review. }, ], - "publisherReviewStatuses": [ # Publisher review statuses for the creative. + "publisherReviewStatuses": [ # Publisher review statuses for the creative. **Warning:** This field will be deprecated on June 26th, 2024. After this date, this field will be empty. Read our [feature deprecation announcement](/display-video/api/deprecations#features.creative_publisher_review_statuses) for more information. { # Publisher review status for the creative. "publisherName": "A String", # The publisher reviewing the creative. "status": "A String", # Status of the publisher review. diff --git a/docs/dyn/documentai_v1.projects.locations.processors.html b/docs/dyn/documentai_v1.projects.locations.processors.html index ff629743aba..6bd1ed1e8b0 100644 --- a/docs/dyn/documentai_v1.projects.locations.processors.html +++ b/docs/dyn/documentai_v1.projects.locations.processors.html @@ -162,6 +162,12 @@

Method Details

42, ], }, + "layoutConfig": { # Serving config for layout parser processor. # Optional. Only applicable to `LAYOUT_PARSER_PROCESSOR`. Returns error if set on other processor types. + "chunkingConfig": { # Serving config for chunking. # Optional. Config for chunking in layout parser processor. + "chunkSize": 42, # Optional. The chunk sizes to use when splitting documents, in order of level. + "includeAncestorHeadings": True or False, # Optional. Whether or not to include ancestor headings when splitting. + }, + }, "ocrConfig": { # Config for Document OCR. # Only applicable to `OCR_PROCESSOR` and `FORM_PARSER_PROCESSOR`. Returns error if set on other processor types. "advancedOcrOptions": [ # A list of advanced OCR options to further fine-tune OCR behavior. Current valid values are: - `legacy_layout`: a heuristics layout detection algorithm, which serves as an alternative to the current ML-based layout detection algorithm. Customers can choose the best suitable layout algorithm based on their situation. "A String", @@ -274,6 +280,8 @@
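
A minimal sketch of where the new `layoutConfig` sits in a `process` request, assuming a `LAYOUT_PARSER_PROCESSOR` (the field returns an error on other processor types, per the description above); the processor resource name and input file are placeholders:

# Hypothetical sketch: chunking options for a layout parser processor.
# The processor resource name and input file are placeholders.
import base64
from googleapiclient.discovery import build

service = build("documentai", "v1")
name = "projects/my-project/locations/us/processors/my-layout-parser"

with open("report.pdf", "rb") as f:
    content = base64.b64encode(f.read()).decode("utf-8")

body = {
    "rawDocument": {"content": content, "mimeType": "application/pdf"},
    "processOptions": {
        "layoutConfig": {  # only valid on LAYOUT_PARSER_PROCESSOR
            "chunkingConfig": {
                "chunkSize": 500,                # target size per chunk
                "includeAncestorHeadings": True, # prepend section headings
            },
        },
    },
}
response = (service.projects().locations().processors()
            .process(name=name, body=body).execute())
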

Method Details

"processorVersion": "A String", # The resource name of aliased processor version. }, ], + "satisfiesPzi": True or False, # Output only. Reserved for future use. + "satisfiesPzs": True or False, # Output only. Reserved for future use. "state": "A String", # Output only. The state of the processor. "type": "A String", # The processor type, such as: `OCR_PROCESSOR`, `INVOICE_PROCESSOR`. To get a list of processor types, see FetchProcessorTypes. } @@ -299,6 +307,8 @@

Method Details

"processorVersion": "A String", # The resource name of aliased processor version. }, ], + "satisfiesPzi": True or False, # Output only. Reserved for future use. + "satisfiesPzs": True or False, # Output only. Reserved for future use. "state": "A String", # Output only. The state of the processor. "type": "A String", # The processor type, such as: `OCR_PROCESSOR`, `INVOICE_PROCESSOR`. To get a list of processor types, see FetchProcessorTypes. }
@@ -448,6 +458,8 @@

Method Details

"processorVersion": "A String", # The resource name of aliased processor version. }, ], + "satisfiesPzi": True or False, # Output only. Reserved for future use. + "satisfiesPzs": True or False, # Output only. Reserved for future use. "state": "A String", # Output only. The state of the processor. "type": "A String", # The processor type, such as: `OCR_PROCESSOR`, `INVOICE_PROCESSOR`. To get a list of processor types, see FetchProcessorTypes. }
@@ -485,6 +497,8 @@

Method Details

"processorVersion": "A String", # The resource name of aliased processor version. }, ], + "satisfiesPzi": True or False, # Output only. Reserved for future use. + "satisfiesPzs": True or False, # Output only. Reserved for future use. "state": "A String", # Output only. The state of the processor. "type": "A String", # The processor type, such as: `OCR_PROCESSOR`, `INVOICE_PROCESSOR`. To get a list of processor types, see FetchProcessorTypes. }, @@ -522,7 +536,97 @@

Method Details

"mimeType": "A String", # An IANA MIME type (RFC6838) of the content. }, "inlineDocument": { # Document represents the canonical document resource in Document AI. It is an interchange format that provides insights into documents and allows for collaboration between users and Document AI to iterate and optimize for quality. # An inline document proto. + "chunkedDocument": { # Represents the chunks that the document is divided into. # Document chunked based on chunking config. + "chunks": [ # List of chunks. + { # Represents a chunk. + "chunkId": "A String", # ID of the chunk. + "content": "A String", # Text content of the chunk. + "pageFooters": [ # Page footers associated with the chunk. + { # Represents the page footer associated with the chunk. + "pageSpan": { # Represents where the chunk starts and ends in the document. # Page span of the footer. + "pageEnd": 42, # Page where chunk ends in the document. + "pageStart": 42, # Page where chunk starts in the document. + }, + "text": "A String", # Footer in text format. + }, + ], + "pageHeaders": [ # Page headers associated with the chunk. + { # Represents the page header associated with the chunk. + "pageSpan": { # Represents where the chunk starts and ends in the document. # Page span of the header. + "pageEnd": 42, # Page where chunk ends in the document. + "pageStart": 42, # Page where chunk starts in the document. + }, + "text": "A String", # Header in text format. + }, + ], + "pageSpan": { # Represents where the chunk starts and ends in the document. # Page span of the chunk. + "pageEnd": 42, # Page where chunk ends in the document. + "pageStart": 42, # Page where chunk starts in the document. + }, + "sourceBlockIds": [ # Unused. + "A String", + ], + }, + ], + }, "content": "A String", # Optional. Inline document content, represented as a stream of bytes. Note: As with all `bytes` fields, protobuffers use a pure binary representation, whereas JSON representations use base64. + "documentLayout": { # Represents the parsed layout of a document as a collection of blocks that the document is divided into. # Parsed layout of the document. + "blocks": [ # List of blocks in the document. + { # Represents a block. A block could be one of the various types (text, table, list) supported. + "blockId": "A String", # ID of the block. + "listBlock": { # Represents a list type block. # Block consisting of list content/structure. + "listEntries": [ # List entries that constitute a list block. + { # Represents an entry in the list. + "blocks": [ # A list entry is a list of blocks. Repeated blocks support further hierarchies and nested blocks. + # Object with schema name: GoogleCloudDocumentaiV1DocumentDocumentLayoutDocumentLayoutBlock + ], + }, + ], + "type": "A String", # Type of the list_entries (if exist). Available options are `ordered` and `unordered`. + }, + "pageSpan": { # Represents where the block starts and ends in the document. # Page span of the block. + "pageEnd": 42, # Page where block ends in the document. + "pageStart": 42, # Page where block starts in the document. + }, + "tableBlock": { # Represents a table type block. # Block consisting of table content/structure. + "bodyRows": [ # Body rows containing main table content. + { # Represents a row in a table. + "cells": [ # A table row is a list of table cells. + { # Represents a cell in a table row. + "blocks": [ # A table cell is a list of blocks. Repeated blocks support further hierarchies and nested blocks. 
+ # Object with schema name: GoogleCloudDocumentaiV1DocumentDocumentLayoutDocumentLayoutBlock + ], + "colSpan": 42, # How many columns this cell spans. + "rowSpan": 42, # How many rows this cell spans. + }, + ], + }, + ], + "caption": "A String", # Table caption/title. + "headerRows": [ # Header rows at the top of the table. + { # Represents a row in a table. + "cells": [ # A table row is a list of table cells. + { # Represents a cell in a table row. + "blocks": [ # A table cell is a list of blocks. Repeated blocks support further hierarchies and nested blocks. + # Object with schema name: GoogleCloudDocumentaiV1DocumentDocumentLayoutDocumentLayoutBlock + ], + "colSpan": 42, # How many columns this cell spans. + "rowSpan": 42, # How many rows this cell spans. + }, + ], + }, + ], + }, + "textBlock": { # Represents a text type block. # Block consisting of text content. + "blocks": [ # A text block could further have child blocks. Repeated blocks support further hierarchies and nested blocks. + # Object with schema name: GoogleCloudDocumentaiV1DocumentDocumentLayoutDocumentLayoutBlock + ], + "text": "A String", # Text content stored in the block. + "type": "A String", # Type of the text in the block. Available options are: `paragraph`, `subtitle`, `heading-1`, `heading-2`, `heading-3`, `heading-4`, `heading-5`, `header`, `footer`. + }, + }, + ], + }, "entities": [ # A list of entities detected on Document.text. For document shards, entities in this list may cross shard boundaries. { # An entity that could be a phrase in the text or a property that belongs to the document. It is a known entity type, such as a person, an organization, or location. "confidence": 3.14, # Optional. Confidence of detected Schema entity. Range `[0, 1]`. @@ -1378,6 +1482,12 @@
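
Because layout blocks nest recursively (text blocks, table cells, and list entries all hold child blocks of the same schema), a short hedged sketch of walking the response may help; `response` is assumed to be the dict returned by a `process` call like the one sketched above:

# Hypothetical sketch: traverse the recursive documentLayout blocks and the
# flat chunk list from a layout parser response (a plain dict from execute()).

def walk_blocks(blocks, depth=0):
    # Text blocks carry content directly; list entries nest child blocks.
    for block in blocks:
        text_block = block.get("textBlock")
        if text_block:
            print("  " * depth + f"[{text_block.get('type')}] "
                  + (text_block.get("text") or ""))
            walk_blocks(text_block.get("blocks", []), depth + 1)
        for entry in block.get("listBlock", {}).get("listEntries", []):
            walk_blocks(entry.get("blocks", []), depth + 1)

document = response.get("document", {})
walk_blocks(document.get("documentLayout", {}).get("blocks", []))

# Chunks are flat; each carries its own text plus page span metadata.
for chunk in document.get("chunkedDocument", {}).get("chunks", []):
    span = chunk.get("pageSpan", {})
    print(chunk.get("chunkId"), span.get("pageStart"), span.get("pageEnd"))
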

Method Details

42, ], }, + "layoutConfig": { # Serving config for layout parser processor. # Optional. Only applicable to `LAYOUT_PARSER_PROCESSOR`. Returns error if set on other processor types. + "chunkingConfig": { # Serving config for chunking. # Optional. Config for chunking in layout parser processor. + "chunkSize": 42, # Optional. The chunk sizes to use when splitting documents, in order of level. + "includeAncestorHeadings": True or False, # Optional. Whether or not to include ancestor headings when splitting. + }, + }, "ocrConfig": { # Config for Document OCR. # Only applicable to `OCR_PROCESSOR` and `FORM_PARSER_PROCESSOR`. Returns error if set on other processor types. "advancedOcrOptions": [ # A list of advanced OCR options to further fine-tune OCR behavior. Current valid values are: - `legacy_layout`: a heuristics layout detection algorithm, which serves as an alternative to the current ML-based layout detection algorithm. Customers can choose the best suitable layout algorithm based on their situation. "A String", @@ -1449,7 +1559,97 @@

Method Details

{ # Response message for the ProcessDocument method. "document": { # Document represents the canonical document resource in Document AI. It is an interchange format that provides insights into documents and allows for collaboration between users and Document AI to iterate and optimize for quality. # The document payload, will populate fields based on the processor's behavior. + "chunkedDocument": { # Represents the chunks that the document is divided into. # Document chunked based on chunking config. + "chunks": [ # List of chunks. + { # Represents a chunk. + "chunkId": "A String", # ID of the chunk. + "content": "A String", # Text content of the chunk. + "pageFooters": [ # Page footers associated with the chunk. + { # Represents the page footer associated with the chunk. + "pageSpan": { # Represents where the chunk starts and ends in the document. # Page span of the footer. + "pageEnd": 42, # Page where chunk ends in the document. + "pageStart": 42, # Page where chunk starts in the document. + }, + "text": "A String", # Footer in text format. + }, + ], + "pageHeaders": [ # Page headers associated with the chunk. + { # Represents the page header associated with the chunk. + "pageSpan": { # Represents where the chunk starts and ends in the document. # Page span of the header. + "pageEnd": 42, # Page where chunk ends in the document. + "pageStart": 42, # Page where chunk starts in the document. + }, + "text": "A String", # Header in text format. + }, + ], + "pageSpan": { # Represents where the chunk starts and ends in the document. # Page span of the chunk. + "pageEnd": 42, # Page where chunk ends in the document. + "pageStart": 42, # Page where chunk starts in the document. + }, + "sourceBlockIds": [ # Unused. + "A String", + ], + }, + ], + }, "content": "A String", # Optional. Inline document content, represented as a stream of bytes. Note: As with all `bytes` fields, protobuffers use a pure binary representation, whereas JSON representations use base64. + "documentLayout": { # Represents the parsed layout of a document as a collection of blocks that the document is divided into. # Parsed layout of the document. + "blocks": [ # List of blocks in the document. + { # Represents a block. A block could be one of the various types (text, table, list) supported. + "blockId": "A String", # ID of the block. + "listBlock": { # Represents a list type block. # Block consisting of list content/structure. + "listEntries": [ # List entries that constitute a list block. + { # Represents an entry in the list. + "blocks": [ # A list entry is a list of blocks. Repeated blocks support further hierarchies and nested blocks. + # Object with schema name: GoogleCloudDocumentaiV1DocumentDocumentLayoutDocumentLayoutBlock + ], + }, + ], + "type": "A String", # Type of the list_entries (if exist). Available options are `ordered` and `unordered`. + }, + "pageSpan": { # Represents where the block starts and ends in the document. # Page span of the block. + "pageEnd": 42, # Page where block ends in the document. + "pageStart": 42, # Page where block starts in the document. + }, + "tableBlock": { # Represents a table type block. # Block consisting of table content/structure. + "bodyRows": [ # Body rows containing main table content. + { # Represents a row in a table. + "cells": [ # A table row is a list of table cells. + { # Represents a cell in a table row. + "blocks": [ # A table cell is a list of blocks. Repeated blocks support further hierarchies and nested blocks. 
+ # Object with schema name: GoogleCloudDocumentaiV1DocumentDocumentLayoutDocumentLayoutBlock + ], + "colSpan": 42, # How many columns this cell spans. + "rowSpan": 42, # How many rows this cell spans. + }, + ], + }, + ], + "caption": "A String", # Table caption/title. + "headerRows": [ # Header rows at the top of the table. + { # Represents a row in a table. + "cells": [ # A table row is a list of table cells. + { # Represents a cell in a table row. + "blocks": [ # A table cell is a list of blocks. Repeated blocks support further hierarchies and nested blocks. + # Object with schema name: GoogleCloudDocumentaiV1DocumentDocumentLayoutDocumentLayoutBlock + ], + "colSpan": 42, # How many columns this cell spans. + "rowSpan": 42, # How many rows this cell spans. + }, + ], + }, + ], + }, + "textBlock": { # Represents a text type block. # Block consisting of text content. + "blocks": [ # A text block could further have child blocks. Repeated blocks support further hierarchies and nested blocks. + # Object with schema name: GoogleCloudDocumentaiV1DocumentDocumentLayoutDocumentLayoutBlock + ], + "text": "A String", # Text content stored in the block. + "type": "A String", # Type of the text in the block. Available options are: `paragraph`, `subtitle`, `heading-1`, `heading-2`, `heading-3`, `heading-4`, `heading-5`, `header`, `footer`. + }, + }, + ], + }, "entities": [ # A list of entities detected on Document.text. For document shards, entities in this list may cross shard boundaries. { # An entity that could be a phrase in the text or a property that belongs to the document. It is a known entity type, such as a person, an organization, or location. "confidence": 3.14, # Optional. Confidence of detected Schema entity. Range `[0, 1]`. diff --git a/docs/dyn/documentai_v1.projects.locations.processors.humanReviewConfig.html b/docs/dyn/documentai_v1.projects.locations.processors.humanReviewConfig.html index 677a2772c6e..f6177a0068b 100644 --- a/docs/dyn/documentai_v1.projects.locations.processors.humanReviewConfig.html +++ b/docs/dyn/documentai_v1.projects.locations.processors.humanReviewConfig.html @@ -130,7 +130,97 @@

Method Details

}, "enableSchemaValidation": True or False, # Whether the validation should be performed on the ad-hoc review request. "inlineDocument": { # Document represents the canonical document resource in Document AI. It is an interchange format that provides insights into documents and allows for collaboration between users and Document AI to iterate and optimize for quality. # An inline document proto. + "chunkedDocument": { # Represents the chunks that the document is divided into. # Document chunked based on chunking config. + "chunks": [ # List of chunks. + { # Represents a chunk. + "chunkId": "A String", # ID of the chunk. + "content": "A String", # Text content of the chunk. + "pageFooters": [ # Page footers associated with the chunk. + { # Represents the page footer associated with the chunk. + "pageSpan": { # Represents where the chunk starts and ends in the document. # Page span of the footer. + "pageEnd": 42, # Page where chunk ends in the document. + "pageStart": 42, # Page where chunk starts in the document. + }, + "text": "A String", # Footer in text format. + }, + ], + "pageHeaders": [ # Page headers associated with the chunk. + { # Represents the page header associated with the chunk. + "pageSpan": { # Represents where the chunk starts and ends in the document. # Page span of the header. + "pageEnd": 42, # Page where chunk ends in the document. + "pageStart": 42, # Page where chunk starts in the document. + }, + "text": "A String", # Header in text format. + }, + ], + "pageSpan": { # Represents where the chunk starts and ends in the document. # Page span of the chunk. + "pageEnd": 42, # Page where chunk ends in the document. + "pageStart": 42, # Page where chunk starts in the document. + }, + "sourceBlockIds": [ # Unused. + "A String", + ], + }, + ], + }, "content": "A String", # Optional. Inline document content, represented as a stream of bytes. Note: As with all `bytes` fields, protobuffers use a pure binary representation, whereas JSON representations use base64. + "documentLayout": { # Represents the parsed layout of a document as a collection of blocks that the document is divided into. # Parsed layout of the document. + "blocks": [ # List of blocks in the document. + { # Represents a block. A block could be one of the various types (text, table, list) supported. + "blockId": "A String", # ID of the block. + "listBlock": { # Represents a list type block. # Block consisting of list content/structure. + "listEntries": [ # List entries that constitute a list block. + { # Represents an entry in the list. + "blocks": [ # A list entry is a list of blocks. Repeated blocks support further hierarchies and nested blocks. + # Object with schema name: GoogleCloudDocumentaiV1DocumentDocumentLayoutDocumentLayoutBlock + ], + }, + ], + "type": "A String", # Type of the list_entries (if exist). Available options are `ordered` and `unordered`. + }, + "pageSpan": { # Represents where the block starts and ends in the document. # Page span of the block. + "pageEnd": 42, # Page where block ends in the document. + "pageStart": 42, # Page where block starts in the document. + }, + "tableBlock": { # Represents a table type block. # Block consisting of table content/structure. + "bodyRows": [ # Body rows containing main table content. + { # Represents a row in a table. + "cells": [ # A table row is a list of table cells. + { # Represents a cell in a table row. + "blocks": [ # A table cell is a list of blocks. Repeated blocks support further hierarchies and nested blocks. 
+ # Object with schema name: GoogleCloudDocumentaiV1DocumentDocumentLayoutDocumentLayoutBlock + ], + "colSpan": 42, # How many columns this cell spans. + "rowSpan": 42, # How many rows this cell spans. + }, + ], + }, + ], + "caption": "A String", # Table caption/title. + "headerRows": [ # Header rows at the top of the table. + { # Represents a row in a table. + "cells": [ # A table row is a list of table cells. + { # Represents a cell in a table row. + "blocks": [ # A table cell is a list of blocks. Repeated blocks support further hierarchies and nested blocks. + # Object with schema name: GoogleCloudDocumentaiV1DocumentDocumentLayoutDocumentLayoutBlock + ], + "colSpan": 42, # How many columns this cell spans. + "rowSpan": 42, # How many rows this cell spans. + }, + ], + }, + ], + }, + "textBlock": { # Represents a text type block. # Block consisting of text content. + "blocks": [ # A text block could further have child blocks. Repeated blocks support further hierarchies and nested blocks. + # Object with schema name: GoogleCloudDocumentaiV1DocumentDocumentLayoutDocumentLayoutBlock + ], + "text": "A String", # Text content stored in the block. + "type": "A String", # Type of the text in the block. Available options are: `paragraph`, `subtitle`, `heading-1`, `heading-2`, `heading-3`, `heading-4`, `heading-5`, `header`, `footer`. + }, + }, + ], + }, "entities": [ # A list of entities detected on Document.text. For document shards, entities in this list may cross shard boundaries. { # An entity that could be a phrase in the text or a property that belongs to the document. It is a known entity type, such as a person, an organization, or location. "confidence": 3.14, # Optional. Confidence of detected Schema entity. Range `[0, 1]`. diff --git a/docs/dyn/documentai_v1.projects.locations.processors.processorVersions.html b/docs/dyn/documentai_v1.projects.locations.processors.processorVersions.html index d3c6eaf050c..beeaefc6202 100644 --- a/docs/dyn/documentai_v1.projects.locations.processors.processorVersions.html +++ b/docs/dyn/documentai_v1.projects.locations.processors.processorVersions.html @@ -157,6 +157,12 @@

Method Details

42, ], }, + "layoutConfig": { # Serving config for layout parser processor. # Optional. Only applicable to `LAYOUT_PARSER_PROCESSOR`. Returns error if set on other processor types. + "chunkingConfig": { # Serving config for chunking. # Optional. Config for chunking in layout parser processor. + "chunkSize": 42, # Optional. The chunk sizes to use when splitting documents, in order of level. + "includeAncestorHeadings": True or False, # Optional. Whether or not to include ancestor headings when splitting. + }, + }, "ocrConfig": { # Config for Document OCR. # Only applicable to `OCR_PROCESSOR` and `FORM_PARSER_PROCESSOR`. Returns error if set on other processor types. "advancedOcrOptions": [ # A list of advanced OCR options to further fine-tune OCR behavior. Current valid values are: - `legacy_layout`: a heuristics layout detection algorithm, which serves as an alternative to the current ML-based layout detection algorithm. Customers can choose the best suitable layout algorithm based on their situation. "A String", @@ -465,6 +471,8 @@

Method Details

}, "modelType": "A String", # Output only. The model type of this processor version. "name": "A String", # Identifier. The resource name of the processor version. Format: `projects/{project}/locations/{location}/processors/{processor}/processorVersions/{processor_version}` + "satisfiesPzi": True or False, # Output only. Reserved for future use. + "satisfiesPzs": True or False, # Output only. Reserved for future use. "state": "A String", # Output only. The state of the processor version. }
@@ -562,6 +570,8 @@

Method Details

}, "modelType": "A String", # Output only. The model type of this processor version. "name": "A String", # Identifier. The resource name of the processor version. Format: `projects/{project}/locations/{location}/processors/{processor}/processorVersions/{processor_version}` + "satisfiesPzi": True or False, # Output only. Reserved for future use. + "satisfiesPzs": True or False, # Output only. Reserved for future use. "state": "A String", # Output only. The state of the processor version. }, ], @@ -598,7 +608,97 @@

Method Details

"mimeType": "A String", # An IANA MIME type (RFC6838) of the content. }, "inlineDocument": { # Document represents the canonical document resource in Document AI. It is an interchange format that provides insights into documents and allows for collaboration between users and Document AI to iterate and optimize for quality. # An inline document proto. + "chunkedDocument": { # Represents the chunks that the document is divided into. # Document chunked based on chunking config. + "chunks": [ # List of chunks. + { # Represents a chunk. + "chunkId": "A String", # ID of the chunk. + "content": "A String", # Text content of the chunk. + "pageFooters": [ # Page footers associated with the chunk. + { # Represents the page footer associated with the chunk. + "pageSpan": { # Represents where the chunk starts and ends in the document. # Page span of the footer. + "pageEnd": 42, # Page where chunk ends in the document. + "pageStart": 42, # Page where chunk starts in the document. + }, + "text": "A String", # Footer in text format. + }, + ], + "pageHeaders": [ # Page headers associated with the chunk. + { # Represents the page header associated with the chunk. + "pageSpan": { # Represents where the chunk starts and ends in the document. # Page span of the header. + "pageEnd": 42, # Page where chunk ends in the document. + "pageStart": 42, # Page where chunk starts in the document. + }, + "text": "A String", # Header in text format. + }, + ], + "pageSpan": { # Represents where the chunk starts and ends in the document. # Page span of the chunk. + "pageEnd": 42, # Page where chunk ends in the document. + "pageStart": 42, # Page where chunk starts in the document. + }, + "sourceBlockIds": [ # Unused. + "A String", + ], + }, + ], + }, "content": "A String", # Optional. Inline document content, represented as a stream of bytes. Note: As with all `bytes` fields, protobuffers use a pure binary representation, whereas JSON representations use base64. + "documentLayout": { # Represents the parsed layout of a document as a collection of blocks that the document is divided into. # Parsed layout of the document. + "blocks": [ # List of blocks in the document. + { # Represents a block. A block could be one of the various types (text, table, list) supported. + "blockId": "A String", # ID of the block. + "listBlock": { # Represents a list type block. # Block consisting of list content/structure. + "listEntries": [ # List entries that constitute a list block. + { # Represents an entry in the list. + "blocks": [ # A list entry is a list of blocks. Repeated blocks support further hierarchies and nested blocks. + # Object with schema name: GoogleCloudDocumentaiV1DocumentDocumentLayoutDocumentLayoutBlock + ], + }, + ], + "type": "A String", # Type of the list_entries (if exist). Available options are `ordered` and `unordered`. + }, + "pageSpan": { # Represents where the block starts and ends in the document. # Page span of the block. + "pageEnd": 42, # Page where block ends in the document. + "pageStart": 42, # Page where block starts in the document. + }, + "tableBlock": { # Represents a table type block. # Block consisting of table content/structure. + "bodyRows": [ # Body rows containing main table content. + { # Represents a row in a table. + "cells": [ # A table row is a list of table cells. + { # Represents a cell in a table row. + "blocks": [ # A table cell is a list of blocks. Repeated blocks support further hierarchies and nested blocks. 
+ # Object with schema name: GoogleCloudDocumentaiV1DocumentDocumentLayoutDocumentLayoutBlock + ], + "colSpan": 42, # How many columns this cell spans. + "rowSpan": 42, # How many rows this cell spans. + }, + ], + }, + ], + "caption": "A String", # Table caption/title. + "headerRows": [ # Header rows at the top of the table. + { # Represents a row in a table. + "cells": [ # A table row is a list of table cells. + { # Represents a cell in a table row. + "blocks": [ # A table cell is a list of blocks. Repeated blocks support further hierarchies and nested blocks. + # Object with schema name: GoogleCloudDocumentaiV1DocumentDocumentLayoutDocumentLayoutBlock + ], + "colSpan": 42, # How many columns this cell spans. + "rowSpan": 42, # How many rows this cell spans. + }, + ], + }, + ], + }, + "textBlock": { # Represents a text type block. # Block consisting of text content. + "blocks": [ # A text block could further have child blocks. Repeated blocks support further hierarchies and nested blocks. + # Object with schema name: GoogleCloudDocumentaiV1DocumentDocumentLayoutDocumentLayoutBlock + ], + "text": "A String", # Text content stored in the block. + "type": "A String", # Type of the text in the block. Available options are: `paragraph`, `subtitle`, `heading-1`, `heading-2`, `heading-3`, `heading-4`, `heading-5`, `header`, `footer`. + }, + }, + ], + }, "entities": [ # A list of entities detected on Document.text. For document shards, entities in this list may cross shard boundaries. { # An entity that could be a phrase in the text or a property that belongs to the document. It is a known entity type, such as a person, an organization, or location. "confidence": 3.14, # Optional. Confidence of detected Schema entity. Range `[0, 1]`. @@ -1454,6 +1554,12 @@

Method Details

42, ], }, + "layoutConfig": { # Serving config for layout parser processor. # Optional. Only applicable to `LAYOUT_PARSER_PROCESSOR`. Returns error if set on other processor types. + "chunkingConfig": { # Serving config for chunking. # Optional. Config for chunking in layout parser processor. + "chunkSize": 42, # Optional. The chunk sizes to use when splitting documents, in order of level. + "includeAncestorHeadings": True or False, # Optional. Whether or not to include ancestor headings when splitting. + }, + }, "ocrConfig": { # Config for Document OCR. # Only applicable to `OCR_PROCESSOR` and `FORM_PARSER_PROCESSOR`. Returns error if set on other processor types. "advancedOcrOptions": [ # A list of advanced OCR options to further fine-tune OCR behavior. Current valid values are: - `legacy_layout`: a heuristics layout detection algorithm, which serves as an alternative to the current ML-based layout detection algorithm. Customers can choose the best suitable layout algorithm based on their situation. "A String", @@ -1525,7 +1631,97 @@

Method Details

{ # Response message for the ProcessDocument method. "document": { # Document represents the canonical document resource in Document AI. It is an interchange format that provides insights into documents and allows for collaboration between users and Document AI to iterate and optimize for quality. # The document payload, will populate fields based on the processor's behavior. + "chunkedDocument": { # Represents the chunks that the document is divided into. # Document chunked based on chunking config. + "chunks": [ # List of chunks. + { # Represents a chunk. + "chunkId": "A String", # ID of the chunk. + "content": "A String", # Text content of the chunk. + "pageFooters": [ # Page footers associated with the chunk. + { # Represents the page footer associated with the chunk. + "pageSpan": { # Represents where the chunk starts and ends in the document. # Page span of the footer. + "pageEnd": 42, # Page where chunk ends in the document. + "pageStart": 42, # Page where chunk starts in the document. + }, + "text": "A String", # Footer in text format. + }, + ], + "pageHeaders": [ # Page headers associated with the chunk. + { # Represents the page header associated with the chunk. + "pageSpan": { # Represents where the chunk starts and ends in the document. # Page span of the header. + "pageEnd": 42, # Page where chunk ends in the document. + "pageStart": 42, # Page where chunk starts in the document. + }, + "text": "A String", # Header in text format. + }, + ], + "pageSpan": { # Represents where the chunk starts and ends in the document. # Page span of the chunk. + "pageEnd": 42, # Page where chunk ends in the document. + "pageStart": 42, # Page where chunk starts in the document. + }, + "sourceBlockIds": [ # Unused. + "A String", + ], + }, + ], + }, "content": "A String", # Optional. Inline document content, represented as a stream of bytes. Note: As with all `bytes` fields, protobuffers use a pure binary representation, whereas JSON representations use base64. + "documentLayout": { # Represents the parsed layout of a document as a collection of blocks that the document is divided into. # Parsed layout of the document. + "blocks": [ # List of blocks in the document. + { # Represents a block. A block could be one of the various types (text, table, list) supported. + "blockId": "A String", # ID of the block. + "listBlock": { # Represents a list type block. # Block consisting of list content/structure. + "listEntries": [ # List entries that constitute a list block. + { # Represents an entry in the list. + "blocks": [ # A list entry is a list of blocks. Repeated blocks support further hierarchies and nested blocks. + # Object with schema name: GoogleCloudDocumentaiV1DocumentDocumentLayoutDocumentLayoutBlock + ], + }, + ], + "type": "A String", # Type of the list_entries (if exist). Available options are `ordered` and `unordered`. + }, + "pageSpan": { # Represents where the block starts and ends in the document. # Page span of the block. + "pageEnd": 42, # Page where block ends in the document. + "pageStart": 42, # Page where block starts in the document. + }, + "tableBlock": { # Represents a table type block. # Block consisting of table content/structure. + "bodyRows": [ # Body rows containing main table content. + { # Represents a row in a table. + "cells": [ # A table row is a list of table cells. + { # Represents a cell in a table row. + "blocks": [ # A table cell is a list of blocks. Repeated blocks support further hierarchies and nested blocks. 
+ # Object with schema name: GoogleCloudDocumentaiV1DocumentDocumentLayoutDocumentLayoutBlock + ], + "colSpan": 42, # How many columns this cell spans. + "rowSpan": 42, # How many rows this cell spans. + }, + ], + }, + ], + "caption": "A String", # Table caption/title. + "headerRows": [ # Header rows at the top of the table. + { # Represents a row in a table. + "cells": [ # A table row is a list of table cells. + { # Represents a cell in a table row. + "blocks": [ # A table cell is a list of blocks. Repeated blocks support further hierarchies and nested blocks. + # Object with schema name: GoogleCloudDocumentaiV1DocumentDocumentLayoutDocumentLayoutBlock + ], + "colSpan": 42, # How many columns this cell spans. + "rowSpan": 42, # How many rows this cell spans. + }, + ], + }, + ], + }, + "textBlock": { # Represents a text type block. # Block consisting of text content. + "blocks": [ # A text block could further have child blocks. Repeated blocks support further hierarchies and nested blocks. + # Object with schema name: GoogleCloudDocumentaiV1DocumentDocumentLayoutDocumentLayoutBlock + ], + "text": "A String", # Text content stored in the block. + "type": "A String", # Type of the text in the block. Available options are: `paragraph`, `subtitle`, `heading-1`, `heading-2`, `heading-3`, `heading-4`, `heading-5`, `header`, `footer`. + }, + }, + ], + }, "entities": [ # A list of entities detected on Document.text. For document shards, entities in this list may cross shard boundaries. { # An entity that could be a phrase in the text or a property that belongs to the document. It is a known entity type, such as a person, an organization, or location. "confidence": 3.14, # Optional. Confidence of detected Schema entity. Range `[0, 1]`. @@ -2530,6 +2726,8 @@

Method Details

}, "modelType": "A String", # Output only. The model type of this processor version. "name": "A String", # Identifier. The resource name of the processor version. Format: `projects/{project}/locations/{location}/processors/{processor}/processorVersions/{processor_version}` + "satisfiesPzi": True or False, # Output only. Reserved for future use. + "satisfiesPzs": True or False, # Output only. Reserved for future use. "state": "A String", # Output only. The state of the processor version. }, } diff --git a/docs/dyn/documentai_v1beta2.projects.documents.html b/docs/dyn/documentai_v1beta2.projects.documents.html index 578fcf830f6..fec41c2601e 100644 --- a/docs/dyn/documentai_v1beta2.projects.documents.html +++ b/docs/dyn/documentai_v1beta2.projects.documents.html @@ -285,7 +285,97 @@

Method Details

An object of the form: { # Document represents the canonical document resource in Document AI. It is an interchange format that provides insights into documents and allows for collaboration between users and Document AI to iterate and optimize for quality. + "chunkedDocument": { # Represents the chunks that the document is divided into. # Document chunked based on chunking config. + "chunks": [ # List of chunks. + { # Represents a chunk. + "chunkId": "A String", # ID of the chunk. + "content": "A String", # Text content of the chunk. + "pageFooters": [ # Page footers associated with the chunk. + { # Represents the page footer associated with the chunk. + "pageSpan": { # Represents where the chunk starts and ends in the document. # Page span of the footer. + "pageEnd": 42, # Page where chunk ends in the document. + "pageStart": 42, # Page where chunk starts in the document. + }, + "text": "A String", # Footer in text format. + }, + ], + "pageHeaders": [ # Page headers associated with the chunk. + { # Represents the page header associated with the chunk. + "pageSpan": { # Represents where the chunk starts and ends in the document. # Page span of the header. + "pageEnd": 42, # Page where chunk ends in the document. + "pageStart": 42, # Page where chunk starts in the document. + }, + "text": "A String", # Header in text format. + }, + ], + "pageSpan": { # Represents where the chunk starts and ends in the document. # Page span of the chunk. + "pageEnd": 42, # Page where chunk ends in the document. + "pageStart": 42, # Page where chunk starts in the document. + }, + "sourceBlockIds": [ # Unused. + "A String", + ], + }, + ], + }, "content": "A String", # Optional. Inline document content, represented as a stream of bytes. Note: As with all `bytes` fields, protobuffers use a pure binary representation, whereas JSON representations use base64. + "documentLayout": { # Represents the parsed layout of a document as a collection of blocks that the document is divided into. # Parsed layout of the document. + "blocks": [ # List of blocks in the document. + { # Represents a block. A block could be one of the various types (text, table, list) supported. + "blockId": "A String", # ID of the block. + "listBlock": { # Represents a list type block. # Block consisting of list content/structure. + "listEntries": [ # List entries that constitute a list block. + { # Represents an entry in the list. + "blocks": [ # A list entry is a list of blocks. Repeated blocks support further hierarchies and nested blocks. + # Object with schema name: GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlock + ], + }, + ], + "type": "A String", # Type of the list_entries (if exist). Available options are `ordered` and `unordered`. + }, + "pageSpan": { # Represents where the block starts and ends in the document. # Page span of the block. + "pageEnd": 42, # Page where block ends in the document. + "pageStart": 42, # Page where block starts in the document. + }, + "tableBlock": { # Represents a table type block. # Block consisting of table content/structure. + "bodyRows": [ # Body rows containing main table content. + { # Represents a row in a table. + "cells": [ # A table row is a list of table cells. + { # Represents a cell in a table row. + "blocks": [ # A table cell is a list of blocks. Repeated blocks support further hierarchies and nested blocks. + # Object with schema name: GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlock + ], + "colSpan": 42, # How many columns this cell spans. 
+ "rowSpan": 42, # How many rows this cell spans. + }, + ], + }, + ], + "caption": "A String", # Table caption/title. + "headerRows": [ # Header rows at the top of the table. + { # Represents a row in a table. + "cells": [ # A table row is a list of table cells. + { # Represents a cell in a table row. + "blocks": [ # A table cell is a list of blocks. Repeated blocks support further hierarchies and nested blocks. + # Object with schema name: GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlock + ], + "colSpan": 42, # How many columns this cell spans. + "rowSpan": 42, # How many rows this cell spans. + }, + ], + }, + ], + }, + "textBlock": { # Represents a text type block. # Block consisting of text content. + "blocks": [ # A text block could further have child blocks. Repeated blocks support further hierarchies and nested blocks. + # Object with schema name: GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlock + ], + "text": "A String", # Text content stored in the block. + "type": "A String", # Type of the text in the block. Available options are: `paragraph`, `subtitle`, `heading-1`, `heading-2`, `heading-3`, `heading-4`, `heading-5`, `header`, `footer`. + }, + }, + ], + }, "entities": [ # A list of entities detected on Document.text. For document shards, entities in this list may cross shard boundaries. { # An entity that could be a phrase in the text or a property that belongs to the document. It is a known entity type, such as a person, an organization, or location. "confidence": 3.14, # Optional. Confidence of detected Schema entity. Range `[0, 1]`. diff --git a/docs/dyn/documentai_v1beta2.projects.locations.documents.html b/docs/dyn/documentai_v1beta2.projects.locations.documents.html index 24d3dc262ba..f75e720fbda 100644 --- a/docs/dyn/documentai_v1beta2.projects.locations.documents.html +++ b/docs/dyn/documentai_v1beta2.projects.locations.documents.html @@ -285,7 +285,97 @@

Method Details

An object of the form:

    { # Document represents the canonical document resource in Document AI. It is an interchange format that provides insights into documents and allows for collaboration between users and Document AI to iterate and optimize for quality.
+  "chunkedDocument": { # Represents the chunks that the document is divided into. # Document chunked based on chunking config.
+    "chunks": [ # List of chunks.
+      { # Represents a chunk.
+        "chunkId": "A String", # ID of the chunk.
+        "content": "A String", # Text content of the chunk.
+        "pageFooters": [ # Page footers associated with the chunk.
+          { # Represents the page footer associated with the chunk.
+            "pageSpan": { # Represents where the chunk starts and ends in the document. # Page span of the footer.
+              "pageEnd": 42, # Page where chunk ends in the document.
+              "pageStart": 42, # Page where chunk starts in the document.
+            },
+            "text": "A String", # Footer in text format.
+          },
+        ],
+        "pageHeaders": [ # Page headers associated with the chunk.
+          { # Represents the page header associated with the chunk.
+            "pageSpan": { # Represents where the chunk starts and ends in the document. # Page span of the header.
+              "pageEnd": 42, # Page where chunk ends in the document.
+              "pageStart": 42, # Page where chunk starts in the document.
+            },
+            "text": "A String", # Header in text format.
+          },
+        ],
+        "pageSpan": { # Represents where the chunk starts and ends in the document. # Page span of the chunk.
+          "pageEnd": 42, # Page where chunk ends in the document.
+          "pageStart": 42, # Page where chunk starts in the document.
+        },
+        "sourceBlockIds": [ # Unused.
+          "A String",
+        ],
+      },
+    ],
+  },
  "content": "A String", # Optional. Inline document content, represented as a stream of bytes. Note: As with all `bytes` fields, protobuffers use a pure binary representation, whereas JSON representations use base64.
+  "documentLayout": { # Represents the parsed layout of a document as a collection of blocks that the document is divided into. # Parsed layout of the document.
+    "blocks": [ # List of blocks in the document.
+      { # Represents a block. A block could be one of the various types (text, table, list) supported.
+        "blockId": "A String", # ID of the block.
+        "listBlock": { # Represents a list type block. # Block consisting of list content/structure.
+          "listEntries": [ # List entries that constitute a list block.
+            { # Represents an entry in the list.
+              "blocks": [ # A list entry is a list of blocks. Repeated blocks support further hierarchies and nested blocks.
+                # Object with schema name: GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlock
+              ],
+            },
+          ],
+          "type": "A String", # Type of the list_entries (if they exist). Available options are `ordered` and `unordered`.
+        },
+        "pageSpan": { # Represents where the block starts and ends in the document. # Page span of the block.
+          "pageEnd": 42, # Page where block ends in the document.
+          "pageStart": 42, # Page where block starts in the document.
+        },
+        "tableBlock": { # Represents a table type block. # Block consisting of table content/structure.
+          "bodyRows": [ # Body rows containing main table content.
+            { # Represents a row in a table.
+              "cells": [ # A table row is a list of table cells.
+                { # Represents a cell in a table row.
+                  "blocks": [ # A table cell is a list of blocks. Repeated blocks support further hierarchies and nested blocks.
+                    # Object with schema name: GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlock
+                  ],
+                  "colSpan": 42, # How many columns this cell spans.
+                  "rowSpan": 42, # How many rows this cell spans.
+                },
+              ],
+            },
+          ],
+          "caption": "A String", # Table caption/title.
+          "headerRows": [ # Header rows at the top of the table.
+            { # Represents a row in a table.
+              "cells": [ # A table row is a list of table cells.
+                { # Represents a cell in a table row.
+                  "blocks": [ # A table cell is a list of blocks. Repeated blocks support further hierarchies and nested blocks.
+                    # Object with schema name: GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlock
+                  ],
+                  "colSpan": 42, # How many columns this cell spans.
+                  "rowSpan": 42, # How many rows this cell spans.
+                },
+              ],
+            },
+          ],
+        },
+        "textBlock": { # Represents a text type block. # Block consisting of text content.
+          "blocks": [ # A text block could further have child blocks. Repeated blocks support further hierarchies and nested blocks.
+            # Object with schema name: GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlock
+          ],
+          "text": "A String", # Text content stored in the block.
+          "type": "A String", # Type of the text in the block. Available options are: `paragraph`, `subtitle`, `heading-1`, `heading-2`, `heading-3`, `heading-4`, `heading-5`, `header`, `footer`.
+        },
+      },
+    ],
+  },
  "entities": [ # A list of entities detected on Document.text. For document shards, entities in this list may cross shard boundaries.
    { # An entity that could be a phrase in the text or a property that belongs to the document. It is a known entity type, such as a person, an organization, or location.
      "confidence": 3.14, # Optional. Confidence of detected Schema entity. Range `[0, 1]`.
diff --git a/docs/dyn/documentai_v1beta3.projects.locations.processors.dataset.html b/docs/dyn/documentai_v1beta3.projects.locations.processors.dataset.html
index 9754b56420e..de992616da3 100644
--- a/docs/dyn/documentai_v1beta3.projects.locations.processors.dataset.html
+++ b/docs/dyn/documentai_v1beta3.projects.locations.processors.dataset.html
@@ -287,7 +287,7 @@

Method Details

"pageEnd": 42, # Page where chunk ends in the document. "pageStart": 42, # Page where chunk starts in the document. }, - "sourceBlockIds": [ # DO NOT USE. List of all parsed documents layout source blocks used to generate the chunk. + "sourceBlockIds": [ # Unused. "A String", ], }, diff --git a/docs/dyn/documentai_v1beta3.projects.locations.processors.html b/docs/dyn/documentai_v1beta3.projects.locations.processors.html index 65a6ba8f621..ec6aebdabe5 100644 --- a/docs/dyn/documentai_v1beta3.projects.locations.processors.html +++ b/docs/dyn/documentai_v1beta3.projects.locations.processors.html @@ -311,6 +311,8 @@

Method Details

"processorVersion": "A String", # The resource name of aliased processor version. }, ], + "satisfiesPzi": True or False, # Output only. Reserved for future use. + "satisfiesPzs": True or False, # Output only. Reserved for future use. "state": "A String", # Output only. The state of the processor. "type": "A String", # The processor type, such as: `OCR_PROCESSOR`, `INVOICE_PROCESSOR`. To get a list of processor types, see FetchProcessorTypes. } @@ -336,6 +338,8 @@

Method Details

"processorVersion": "A String", # The resource name of aliased processor version. }, ], + "satisfiesPzi": True or False, # Output only. Reserved for future use. + "satisfiesPzs": True or False, # Output only. Reserved for future use. "state": "A String", # Output only. The state of the processor. "type": "A String", # The processor type, such as: `OCR_PROCESSOR`, `INVOICE_PROCESSOR`. To get a list of processor types, see FetchProcessorTypes. }
@@ -485,6 +489,8 @@

Method Details

"processorVersion": "A String", # The resource name of aliased processor version. }, ], + "satisfiesPzi": True or False, # Output only. Reserved for future use. + "satisfiesPzs": True or False, # Output only. Reserved for future use. "state": "A String", # Output only. The state of the processor. "type": "A String", # The processor type, such as: `OCR_PROCESSOR`, `INVOICE_PROCESSOR`. To get a list of processor types, see FetchProcessorTypes. }
@@ -522,6 +528,8 @@

Method Details

"processorVersion": "A String", # The resource name of aliased processor version. }, ], + "satisfiesPzi": True or False, # Output only. Reserved for future use. + "satisfiesPzs": True or False, # Output only. Reserved for future use. "state": "A String", # Output only. The state of the processor. "type": "A String", # The processor type, such as: `OCR_PROCESSOR`, `INVOICE_PROCESSOR`. To get a list of processor types, see FetchProcessorTypes. }, @@ -581,7 +589,7 @@

Method Details

"pageEnd": 42, # Page where chunk ends in the document. "pageStart": 42, # Page where chunk starts in the document. }, - "sourceBlockIds": [ # DO NOT USE. List of all parsed documents layout source blocks used to generate the chunk. + "sourceBlockIds": [ # Unused. "A String", ], }, @@ -1522,7 +1530,7 @@

Method Details

"pageEnd": 42, # Page where chunk ends in the document. "pageStart": 42, # Page where chunk starts in the document. }, - "sourceBlockIds": [ # DO NOT USE. List of all parsed documents layout source blocks used to generate the chunk. + "sourceBlockIds": [ # Unused. "A String", ], }, @@ -2559,7 +2567,7 @@

Method Details

"pageEnd": 42, # Page where chunk ends in the document. "pageStart": 42, # Page where chunk starts in the document. }, - "sourceBlockIds": [ # DO NOT USE. List of all parsed documents layout source blocks used to generate the chunk. + "sourceBlockIds": [ # Unused. "A String", ], }, @@ -3538,6 +3546,8 @@

Method Details

}, }, "name": "A String", # Dataset resource name. Format: `projects/{project}/locations/{location}/processors/{processor}/dataset` + "satisfiesPzi": True or False, # Output only. Reserved for future use. + "satisfiesPzs": True or False, # Output only. Reserved for future use. "spannerIndexingConfig": { # Configuration specific to spanner-based indexing. # Optional. A lightweight indexing source with low latency and high reliability, but lacking advanced features like CMEK and content-based search. }, "state": "A String", # Required. State of the dataset. Ignored when updating dataset. diff --git a/docs/dyn/documentai_v1beta3.projects.locations.processors.humanReviewConfig.html b/docs/dyn/documentai_v1beta3.projects.locations.processors.humanReviewConfig.html index 4692c55b026..bc196cb5486 100644 --- a/docs/dyn/documentai_v1beta3.projects.locations.processors.humanReviewConfig.html +++ b/docs/dyn/documentai_v1beta3.projects.locations.processors.humanReviewConfig.html @@ -124,7 +124,7 @@

Method Details

"pageEnd": 42, # Page where chunk ends in the document. "pageStart": 42, # Page where chunk starts in the document. }, - "sourceBlockIds": [ # DO NOT USE. List of all parsed documents layout source blocks used to generate the chunk. + "sourceBlockIds": [ # Unused. "A String", ], }, @@ -1105,7 +1105,7 @@

Method Details

"pageEnd": 42, # Page where chunk ends in the document. "pageStart": 42, # Page where chunk starts in the document. }, - "sourceBlockIds": [ # DO NOT USE. List of all parsed documents layout source blocks used to generate the chunk. + "sourceBlockIds": [ # Unused. "A String", ], }, diff --git a/docs/dyn/documentai_v1beta3.projects.locations.processors.processorVersions.html b/docs/dyn/documentai_v1beta3.projects.locations.processors.processorVersions.html index 199eea3c613..398b3c8d3df 100644 --- a/docs/dyn/documentai_v1beta3.projects.locations.processors.processorVersions.html +++ b/docs/dyn/documentai_v1beta3.projects.locations.processors.processorVersions.html @@ -509,6 +509,8 @@

Method Details

}, "modelType": "A String", # Output only. The model type of this processor version. "name": "A String", # Identifier. The resource name of the processor version. Format: `projects/{project}/locations/{location}/processors/{processor}/processorVersions/{processor_version}` + "satisfiesPzi": True or False, # Output only. Reserved for future use. + "satisfiesPzs": True or False, # Output only. Reserved for future use. "state": "A String", # Output only. The state of the processor version. }
@@ -664,6 +666,8 @@

Method Details

}, "modelType": "A String", # Output only. The model type of this processor version. "name": "A String", # Identifier. The resource name of the processor version. Format: `projects/{project}/locations/{location}/processors/{processor}/processorVersions/{processor_version}` + "satisfiesPzi": True or False, # Output only. Reserved for future use. + "satisfiesPzs": True or False, # Output only. Reserved for future use. "state": "A String", # Output only. The state of the processor version. }, ], @@ -722,7 +726,7 @@

Method Details

"pageEnd": 42, # Page where chunk ends in the document. "pageStart": 42, # Page where chunk starts in the document. }, - "sourceBlockIds": [ # DO NOT USE. List of all parsed documents layout source blocks used to generate the chunk. + "sourceBlockIds": [ # Unused. "A String", ], }, @@ -1663,7 +1667,7 @@

Method Details

"pageEnd": 42, # Page where chunk ends in the document. "pageStart": 42, # Page where chunk starts in the document. }, - "sourceBlockIds": [ # DO NOT USE. List of all parsed documents layout source blocks used to generate the chunk. + "sourceBlockIds": [ # Unused. "A String", ], }, @@ -2700,7 +2704,7 @@

Method Details

"pageEnd": 42, # Page where chunk ends in the document. "pageStart": 42, # Page where chunk starts in the document. }, - "sourceBlockIds": [ # DO NOT USE. List of all parsed documents layout source blocks used to generate the chunk. + "sourceBlockIds": [ # Unused. "A String", ], }, @@ -3793,6 +3797,8 @@

Method Details

}, "modelType": "A String", # Output only. The model type of this processor version. "name": "A String", # Identifier. The resource name of the processor version. Format: `projects/{project}/locations/{location}/processors/{processor}/processorVersions/{processor_version}` + "satisfiesPzi": True or False, # Output only. Reserved for future use. + "satisfiesPzs": True or False, # Output only. Reserved for future use. "state": "A String", # Output only. The state of the processor version. }, } diff --git a/docs/dyn/fcmdata_v1beta1.projects.androidApps.deliveryData.html b/docs/dyn/fcmdata_v1beta1.projects.androidApps.deliveryData.html index c0211d601c2..7273f88af6b 100644 --- a/docs/dyn/fcmdata_v1beta1.projects.androidApps.deliveryData.html +++ b/docs/dyn/fcmdata_v1beta1.projects.androidApps.deliveryData.html @@ -124,10 +124,12 @@

Method Details

"priorityLowered": 3.14, # The percentage of accepted messages that had their priority lowered from high to normal. See [documentation for setting message priority](https://firebase.google.com/docs/cloud-messaging/android/message-priority). }, "messageOutcomePercents": { # Percentage breakdown of message delivery outcomes. These categories are mutually exclusive. All percentages are calculated with countMessagesAccepted as the denominator. These categories may not account for all message outcomes. # Mutually exclusive breakdown of message delivery outcomes. + "collapsed": 3.14, # The percentage of accepted messages that were [collapsed](https://firebase.google.com/docs/cloud-messaging/concept-options#collapsible_and_non-collapsible_messages) by another message. "delivered": 3.14, # The percentage of all accepted messages that were successfully delivered to the device. "droppedAppForceStopped": 3.14, # The percentage of accepted messages that were dropped because the application was force stopped on the device at the time of delivery and retries were unsuccessful. "droppedDeviceInactive": 3.14, # The percentage of accepted messages that were dropped because the target device is inactive. FCM will drop messages if the target device is deemed inactive by our servers. If a device does reconnect, we call [OnDeletedMessages()](https://firebase.google.com/docs/cloud-messaging/android/receive#override-ondeletedmessages) in our SDK instead of delivering the messages. "droppedTooManyPendingMessages": 3.14, # The percentage of accepted messages that were dropped due to [too many undelivered non-collapsible messages](https://firebase.google.com/docs/cloud-messaging/concept-options#collapsible_and_non-collapsible_messages). Specifically, each app instance can only have 100 pending messages stored on our servers for a device which is disconnected. When that device reconnects, those messages are delivered. When there are more than the maximum pending messages, we call [OnDeletedMessages()](https://firebase.google.com/docs/cloud-messaging/android/receive#override-ondeletedmessages) in our SDK instead of delivering the messages. + "droppedTtlExpired": 3.14, # The percentage of accepted messages that expired because [Time To Live (TTL)](https://firebase.google.com/docs/cloud-messaging/concept-options#ttl) elapsed before the target device reconnected. "pending": 3.14, # The percentage of messages accepted on this day that were not dropped and not delivered, due to the device being disconnected (as of the end of the America/Los_Angeles day when the message was sent to FCM). A portion of these messages will be delivered the next day when the device connects but others may be destined to devices that ultimately never reconnect. }, "proxyNotificationInsightPercents": { # Additional information about proxy notification delivery. All percentages are calculated with countNotificationsAccepted as the denominator. # Additional insights about proxy notification delivery. diff --git a/docs/dyn/firebaseappcheck_v1.html b/docs/dyn/firebaseappcheck_v1.html index 09cab612fa2..f78b97394ff 100644 --- a/docs/dyn/firebaseappcheck_v1.html +++ b/docs/dyn/firebaseappcheck_v1.html @@ -79,6 +79,11 @@

Instance Methods

Returns the jwks Resource.

+

+ oauthClients() +

+

Returns the oauthClients Resource.

+

projects()

diff --git a/docs/dyn/firebaseappcheck_v1.oauthClients.html b/docs/dyn/firebaseappcheck_v1.oauthClients.html new file mode 100644 index 00000000000..518793277f6 --- /dev/null +++ b/docs/dyn/firebaseappcheck_v1.oauthClients.html @@ -0,0 +1,215 @@ + + + +

Firebase App Check API . oauthClients

+

Instance Methods

+

+ close()

+

Close httplib2 connections.

+

+ exchangeAppAttestAssertion(app, body=None, x__xgafv=None)

+

Accepts an App Attest assertion and an artifact previously obtained from ExchangeAppAttestAttestation and verifies those with Apple. If valid, returns an AppCheckToken.

+

+ exchangeAppAttestAttestation(app, body=None, x__xgafv=None)

+

Accepts an App Attest CBOR attestation and verifies it with Apple using your preconfigured team and bundle IDs. If valid, returns an attestation artifact that can later be exchanged for an AppCheckToken using ExchangeAppAttestAssertion. For convenience and performance, this method's response object will also contain an AppCheckToken (if the verification is successful).

+

+ exchangeDebugToken(app, body=None, x__xgafv=None)

+

Validates a debug token secret that you have previously created using CreateDebugToken. If valid, returns an AppCheckToken. Note that a restrictive quota is enforced on this method to prevent accidental exposure of the app to abuse.

+

+ generateAppAttestChallenge(app, body=None, x__xgafv=None)

+

Generates a challenge that protects the integrity of an immediately following call to ExchangeAppAttestAttestation or ExchangeAppAttestAssertion. A challenge should not be reused for multiple calls.

+

Method Details

+
+ close() +
Close httplib2 connections.
+
+ +
+ exchangeAppAttestAssertion(app, body=None, x__xgafv=None) +
Accepts an App Attest assertion and an artifact previously obtained from ExchangeAppAttestAttestation and verifies those with Apple. If valid, returns an AppCheckToken.
+
+Args:
+  app: string, Required. The relative resource name of the iOS app, in the format: ``` projects/{project_number}/apps/{app_id} ``` If necessary, the `project_number` element can be replaced with the project ID of the Firebase project. Learn more about using project identifiers in Google's [AIP 2510](https://google.aip.dev/cloud/2510) standard. (required)
+  body: object, The request body.
+    The object takes the form of:
+
+{ # Request message for the ExchangeAppAttestAssertion method.
+  "artifact": "A String", # Required. The artifact returned by a previous call to ExchangeAppAttestAttestation.
+  "assertion": "A String", # Required. The CBOR-encoded assertion returned by the client-side App Attest API.
+  "challenge": "A String", # Required. A one-time challenge returned by an immediately prior call to GenerateAppAttestChallenge.
+  "limitedUse": True or False, # Specifies whether this attestation is for use in a *limited use* (`true`) or *session based* (`false`) context. To enable this attestation to be used with the *replay protection* feature, set this to `true`. The default value is `false`.
+}
+
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Encapsulates an *App Check token*, which is used to access Firebase services protected by App Check.
+  "token": "A String", # The App Check token. App Check tokens are signed [JWTs](https://tools.ietf.org/html/rfc7519) containing claims that identify the attested app and Firebase project. This token is used to access Firebase services protected by App Check. These tokens can also be [verified by your own custom backends](https://firebase.google.com/docs/app-check/custom-resource-backend) using the Firebase Admin SDK.
+  "ttl": "A String", # The duration from the time this token is minted until its expiration. This field is intended to ease client-side token management, since the client may have clock skew, but is still able to accurately measure a duration.
+}
+
+ +
+ exchangeAppAttestAttestation(app, body=None, x__xgafv=None) +
Accepts an App Attest CBOR attestation and verifies it with Apple using your preconfigured team and bundle IDs. If valid, returns an attestation artifact that can later be exchanged for an AppCheckToken using ExchangeAppAttestAssertion. For convenience and performance, this method's response object will also contain an AppCheckToken (if the verification is successful).
+
+Args:
+  app: string, Required. The relative resource name of the iOS app, in the format: ``` projects/{project_number}/apps/{app_id} ``` If necessary, the `project_number` element can be replaced with the project ID of the Firebase project. Learn more about using project identifiers in Google's [AIP 2510](https://google.aip.dev/cloud/2510) standard. (required)
+  body: object, The request body.
+    The object takes the form of:
+
+{ # Request message for the ExchangeAppAttestAttestation method.
+  "attestationStatement": "A String", # Required. The App Attest statement returned by the client-side App Attest API. This is a base64url encoded CBOR object in the JSON response.
+  "challenge": "A String", # Required. A one-time challenge returned by an immediately prior call to GenerateAppAttestChallenge.
+  "keyId": "A String", # Required. The key ID generated by App Attest for the client app.
+  "limitedUse": True or False, # Specifies whether this attestation is for use in a *limited use* (`true`) or *session based* (`false`) context. To enable this attestation to be used with the *replay protection* feature, set this to `true`. The default value is `false`.
+}
+
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Response message for the ExchangeAppAttestAttestation method.
+  "appCheckToken": { # Encapsulates an *App Check token*, which are used to access Firebase services protected by App Check. # Encapsulates an App Check token.
+    "token": "A String", # The App Check token. App Check tokens are signed [JWTs](https://tools.ietf.org/html/rfc7519) containing claims that identify the attested app and Firebase project. This token is used to access Firebase services protected by App Check. These tokens can also be [verified by your own custom backends](https://firebase.google.com/docs/app-check/custom-resource-backend) using the Firebase Admin SDK.
+    "ttl": "A String", # The duration from the time this token is minted until its expiration. This field is intended to ease client-side token management, since the client may have clock skew, but is still able to accurately measure a duration.
+  },
+  "artifact": "A String", # An artifact that can be used in future calls to ExchangeAppAttestAssertion.
+}
+
+ +
+ exchangeDebugToken(app, body=None, x__xgafv=None) +
Validates a debug token secret that you have previously created using CreateDebugToken. If valid, returns an AppCheckToken. Note that a restrictive quota is enforced on this method to prevent accidental exposure of the app to abuse.
+
+Args:
+  app: string, Required. The relative resource name of the app, in the format: ``` projects/{project_number}/apps/{app_id} ``` If necessary, the `project_number` element can be replaced with the project ID of the Firebase project. Learn more about using project identifiers in Google's [AIP 2510](https://google.aip.dev/cloud/2510) standard. (required)
+  body: object, The request body.
+    The object takes the form of:
+
+{ # Request message for the ExchangeDebugToken method.
+  "debugToken": "A String", # Required. A debug token secret. This string must match a debug token secret previously created using CreateDebugToken.
+  "limitedUse": True or False, # Specifies whether this attestation is for use in a *limited use* (`true`) or *session based* (`false`) context. To enable this attestation to be used with the *replay protection* feature, set this to `true`. The default value is `false`.
+}
+
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Encapsulates an *App Check token*, which is used to access Firebase services protected by App Check.
+  "token": "A String", # The App Check token. App Check tokens are signed [JWTs](https://tools.ietf.org/html/rfc7519) containing claims that identify the attested app and Firebase project. This token is used to access Firebase services protected by App Check. These tokens can also be [verified by your own custom backends](https://firebase.google.com/docs/app-check/custom-resource-backend) using the Firebase Admin SDK.
+  "ttl": "A String", # The duration from the time this token is minted until its expiration. This field is intended to ease client-side token management, since the client may have clock skew, but is still able to accurately measure a duration.
+}
+
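A short sketch (editorial, not part of this patch) of exercising this method with the generated client; the app resource name and debug token secret are placeholders, and Application Default Credentials are assumed:

from googleapiclient.discovery import build

service = build("firebaseappcheck", "v1")
app = "projects/123456789/apps/1:123456789:ios:abc123"  # placeholder
resp = service.oauthClients().exchangeDebugToken(
    app=app,
    body={"debugToken": "YOUR-DEBUG-TOKEN-SECRET", "limitedUse": False},
).execute()
print(resp["token"], resp["ttl"])  # AppCheckToken fields documented above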
+ +
+ generateAppAttestChallenge(app, body=None, x__xgafv=None) +
Generates a challenge that protects the integrity of an immediately following call to ExchangeAppAttestAttestation or ExchangeAppAttestAssertion. A challenge should not be reused for multiple calls.
+
+Args:
+  app: string, Required. The relative resource name of the iOS app, in the format: ``` projects/{project_number}/apps/{app_id} ``` If necessary, the `project_number` element can be replaced with the project ID of the Firebase project. Learn more about using project identifiers in Google's [AIP 2510](https://google.aip.dev/cloud/2510) standard. (required)
+  body: object, The request body.
+    The object takes the form of:
+
+{ # Request message for the GenerateAppAttestChallenge method.
+}
+
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Response message for the GenerateAppAttestChallenge method.
+  "challenge": "A String", # A one-time use challenge for the client to pass to the App Attest API.
+  "ttl": "A String", # The duration from the time this challenge is minted until its expiration. This field is intended to ease client-side token management, since the client may have clock skew, but is still able to accurately measure a duration.
+}
+
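Taken together, the three App Attest methods above form a round trip; a sketch (editorial, not part of this patch) with placeholder values that would really come from the device-side App Attest API:

from googleapiclient.discovery import build

service = build("firebaseappcheck", "v1")
app = "projects/123456789/apps/1:123456789:ios:abc123"  # placeholder

# 1. Mint a one-time challenge (the request body is an empty message).
challenge = service.oauthClients().generateAppAttestChallenge(app=app, body={}).execute()

# 2. Exchange the device's CBOR attestation for a reusable artifact.
attested = service.oauthClients().exchangeAppAttestAttestation(app=app, body={
    "attestationStatement": "BASE64URL_CBOR_FROM_DEVICE",  # placeholder
    "keyId": "KEY_ID_FROM_DEVICE",  # placeholder
    "challenge": challenge["challenge"],
}).execute()

# 3. Later calls pair the stored artifact with a fresh challenge and assertion;
#    challenges must not be reused across calls.
fresh = service.oauthClients().generateAppAttestChallenge(app=app, body={}).execute()
token = service.oauthClients().exchangeAppAttestAssertion(app=app, body={
    "artifact": attested["artifact"],
    "assertion": "BASE64URL_CBOR_ASSERTION",  # placeholder
    "challenge": fresh["challenge"],
}).execute()
print(token["token"], token["ttl"])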
+ + \ No newline at end of file diff --git a/docs/dyn/firebaseappcheck_v1.projects.apps.recaptchaV3Config.html b/docs/dyn/firebaseappcheck_v1.projects.apps.recaptchaV3Config.html index ab0732da8ae..b6067ef7769 100644 --- a/docs/dyn/firebaseappcheck_v1.projects.apps.recaptchaV3Config.html +++ b/docs/dyn/firebaseappcheck_v1.projects.apps.recaptchaV3Config.html @@ -85,7 +85,7 @@

Instance Methods

Gets the RecaptchaV3Config for the specified app. For security reasons, the `site_secret` field is never populated in the response.

patch(name, body=None, updateMask=None, x__xgafv=None)

-

Updates the RecaptchaV3Config for the specified app. While this configuration is incomplete or invalid, the app will be unable to exchange reCAPTCHA tokens for App Check tokens. For security reasons, the `site_secret` field is never populated in the response.

+

Updates the RecaptchaV3Config for the specified app. While this configuration is incomplete or invalid, the app will be unable to exchange reCAPTCHA V3 tokens for App Check tokens. For security reasons, the `site_secret` field is never populated in the response.

Method Details

batchGet(parent, names=None, x__xgafv=None) @@ -143,7 +143,7 @@

Method Details

patch(name, body=None, updateMask=None, x__xgafv=None) -
Updates the RecaptchaV3Config for the specified app. While this configuration is incomplete or invalid, the app will be unable to exchange reCAPTCHA tokens for App Check tokens. For security reasons, the `site_secret` field is never populated in the response.
+  
Updates the RecaptchaV3Config for the specified app. While this configuration is incomplete or invalid, the app will be unable to exchange reCAPTCHA V3 tokens for App Check tokens. For security reasons, the `site_secret` field is never populated in the response.
 
 Args:
   name: string, Required. The relative resource name of the reCAPTCHA v3 configuration object, in the format: ``` projects/{project_number}/apps/{app_id}/recaptchaV3Config ``` (required)
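A sketch (editorial, not part of this patch) of the patch call described above, rotating the site secret under a field mask; the identifiers and secret are placeholders, and the mask field name follows the JSON schema:

from googleapiclient.discovery import build

service = build("firebaseappcheck", "v1")
name = "projects/123456789/apps/1:123456789:web:abc123/recaptchaV3Config"  # placeholder
service.projects().apps().recaptchaV3Config().patch(
    name=name,
    updateMask="siteSecret",
    body={"siteSecret": "NEW-RECAPTCHA-V3-SITE-SECRET"},  # placeholder secret
).execute()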
diff --git a/docs/dyn/firebaseappcheck_v1beta.projects.services.html b/docs/dyn/firebaseappcheck_v1beta.projects.services.html
index 435bb961934..e0e3f483ea2 100644
--- a/docs/dyn/firebaseappcheck_v1beta.projects.services.html
+++ b/docs/dyn/firebaseappcheck_v1beta.projects.services.html
@@ -110,7 +110,7 @@ 

Method Details

{ # Request message for the BatchUpdateServices method. "requests": [ # Required. The request messages specifying the Services to update. A maximum of 100 objects can be updated in a batch. { # Request message for the UpdateService method as well as an individual update message for the BatchUpdateServices method. - "service": { # The enforcement configuration for a Firebase service supported by App Check. # Required. The Service to update. The Service's `name` field is used to identify the Service to be updated, in the format: ``` projects/{project_number}/services/{service_id} ``` Note that the `service_id` element must be a supported service ID. Currently, the following service IDs are supported: * `firebasestorage.googleapis.com` (Cloud Storage for Firebase) * `firebasedatabase.googleapis.com` (Firebase Realtime Database) * `firestore.googleapis.com` (Cloud Firestore) * `identitytoolkit.googleapis.com` (Firebase Authentication with Identity Platform) For Firebase Authentication to work with App Check, you must first upgrade to [Firebase Authentication with Identity Platform](https://firebase.google.com/docs/auth#identity-platform). + "service": { # The enforcement configuration for a Firebase service supported by App Check. # Required. The Service to update. The Service's `name` field is used to identify the Service to be updated, in the format: ``` projects/{project_number}/services/{service_id} ``` Note that the `service_id` element must be a supported service ID. Currently, the following service IDs are supported: * `firebasestorage.googleapis.com` (Cloud Storage for Firebase) * `firebasedatabase.googleapis.com` (Firebase Realtime Database) * `firestore.googleapis.com` (Cloud Firestore) * `identitytoolkit.googleapis.com` (Firebase Authentication with Identity Platform) * `oauth2.googleapis.com` (Google Identity for iOS) For Firebase Authentication to work with App Check, you must first upgrade to [Firebase Authentication with Identity Platform](https://firebase.google.com/docs/auth#identity-platform). "enforcementMode": "A String", # Required. The App Check enforcement mode for this service. "etag": "A String", # This checksum is computed by the server based on the value of other fields, and may be sent on update and delete requests to ensure the client has an up-to-date value before proceeding. This etag is strongly validated as defined by RFC 7232. "name": "A String", # Required. The relative resource name of the service configuration object, in the format: ``` projects/{project_number}/services/{service_id} ``` Note that the `service_id` element must be a supported service ID. Currently, the following service IDs are supported: * `firebasestorage.googleapis.com` (Cloud Storage for Firebase) * `firebasedatabase.googleapis.com` (Firebase Realtime Database) * `firestore.googleapis.com` (Cloud Firestore) * `identitytoolkit.googleapis.com` (Firebase Authentication with Identity Platform) @@ -119,7 +119,7 @@

Method Details

"updateMask": "A String", # Required. A comma-separated list of names of fields in the Service to update. Example: `enforcement_mode`. }, ], - "updateMask": "A String", # Optional. A comma-separated list of names of fields in the Services to update. Example: `display_name`. If this field is present, the `update_mask` field in the UpdateServiceRequest messages must all match this field, or the entire batch fails and no updates will be committed. + "updateMask": "A String", # Optional. A comma-separated list of names of fields in the Services to update. Example: `display_name`. If the `update_mask` field is set in both this request and any of the UpdateServiceRequest messages, they must match or the entire batch fails and no updates will be committed. } x__xgafv: string, V1 error format. @@ -152,7 +152,7 @@

Method Details

Gets the Service configuration for the specified service name.
 
 Args:
-  name: string, Required. The relative resource name of the Service to retrieve, in the format: ``` projects/{project_number}/services/{service_id} ``` Note that the `service_id` element must be a supported service ID. Currently, the following service IDs are supported: * `firebasestorage.googleapis.com` (Cloud Storage for Firebase) * `firebasedatabase.googleapis.com` (Firebase Realtime Database) * `firestore.googleapis.com` (Cloud Firestore) * `identitytoolkit.googleapis.com` (Firebase Authentication with Identity Platform) (required)
+  name: string, Required. The relative resource name of the Service to retrieve, in the format: ``` projects/{project_number}/services/{service_id} ``` Note that the `service_id` element must be a supported service ID. Currently, the following service IDs are supported: * `firebasestorage.googleapis.com` (Cloud Storage for Firebase) * `firebasedatabase.googleapis.com` (Firebase Realtime Database) * `firestore.googleapis.com` (Cloud Firestore) * `identitytoolkit.googleapis.com` (Firebase Authentication with Identity Platform) * `oauth2.googleapis.com` (Google Identity for iOS) (required)
   x__xgafv: string, V1 error format.
     Allowed values
       1 - v1 error format
@@ -175,7 +175,7 @@ 

Method Details

Args: parent: string, Required. The relative resource name of the parent project for which to list each associated Service, in the format: ``` projects/{project_number} ``` (required) - pageSize: integer, The maximum number of Services to return in the response. Only explicitly configured services are returned. The server may return fewer than this at its own discretion. If no value is specified or set to zero (or too large a value is specified), the server will impose its own limit. + pageSize: integer, The maximum number of Services to return in the response. Only explicitly configured services are returned. The server may return fewer than this at its own discretion. If no value is specified (or too large a value is specified), the server will impose its own limit. pageToken: string, Token returned from a previous call to ListServices indicating where in the set of Services to resume listing. Provide this to retrieve the subsequent page. When paginating, all other parameters provided to ListServices must match the call that provided the page token; if they do not match, the result is undefined. x__xgafv: string, V1 error format. Allowed values diff --git a/docs/dyn/firebaseappcheck_v1beta.projects.services.resourcePolicies.html b/docs/dyn/firebaseappcheck_v1beta.projects.services.resourcePolicies.html index 24af9ccfc64..34d404b2bf9 100644 --- a/docs/dyn/firebaseappcheck_v1beta.projects.services.resourcePolicies.html +++ b/docs/dyn/firebaseappcheck_v1beta.projects.services.resourcePolicies.html @@ -114,7 +114,7 @@

Method Details

"resourcePolicy": { # App Check enforcement policy for a specific resource of a Firebase service supported by App Check. Note that this policy will override the service-level configuration. # Required. The ResourcePolicy to update. The ResourcePolicy's `name` field is used to identify the ResourcePolicy to be updated, in the format: ``` projects/{project_number}/services/{service_id}/resourcePolicies/{resource_policy_id} ``` Note that the `service_id` element must be a supported service ID. Currently, the following service IDs are supported: * `oauth2.googleapis.com` (Google Identity for iOS) "enforcementMode": "A String", # Required. The App Check enforcement mode for this resource. This will override the EnforcementMode setting on the parent service. "etag": "A String", # This checksum is computed by the server based on the value of other fields, and may be sent on update and delete requests to ensure the client has an up-to-date value before proceeding. This etag is strongly validated as defined by RFC 7232. - "name": "A String", # Required. The relative name of the resource policy object, in the format: ``` projects/{project_number}/services/{service_id}/resourcePolicies/{resource_policy_id} ``` Note that the `service_id` element must be a supported service ID. Currently, the following service IDs are supported: * `oauth2.googleapis.com` (Google Identity for iOS) `resource_policy_id` is a system-generated UID. + "name": "A String", # Required. Identifier. The relative name of the resource policy object, in the format: ``` projects/{project_number}/services/{service_id}/resourcePolicies/{resource_policy_id} ``` Note that the `service_id` element must be a supported service ID. Currently, the following service IDs are supported: * `oauth2.googleapis.com` (Google Identity for iOS) `resource_policy_id` is a system-generated UID. "targetResource": "A String", # Required. Service specific name of the resource object to which this policy applies, in the format: * `//oauth2.googleapis.com/projects/{project_number}/oauthClients/{oauth_client_id}` (Google Identity for iOS) Note that the resource must belong to the service specified in the `name` and be from the same project as this policy, but the resource is allowed to be missing at the time of creation of this policy; in that case, we make a best-effort attempt at respecting this policy, but it may not have any effect until the resource is fully created. "updateTime": "A String", # Output only. Timestamp when this resource policy configuration object was most recently updated. }, @@ -137,7 +137,7 @@

Method Details

{ # App Check enforcement policy for a specific resource of a Firebase service supported by App Check. Note that this policy will override the service-level configuration. "enforcementMode": "A String", # Required. The App Check enforcement mode for this resource. This will override the EnforcementMode setting on the parent service. "etag": "A String", # This checksum is computed by the server based on the value of other fields, and may be sent on update and delete requests to ensure the client has an up-to-date value before proceeding. This etag is strongly validated as defined by RFC 7232. - "name": "A String", # Required. The relative name of the resource policy object, in the format: ``` projects/{project_number}/services/{service_id}/resourcePolicies/{resource_policy_id} ``` Note that the `service_id` element must be a supported service ID. Currently, the following service IDs are supported: * `oauth2.googleapis.com` (Google Identity for iOS) `resource_policy_id` is a system-generated UID. + "name": "A String", # Required. Identifier. The relative name of the resource policy object, in the format: ``` projects/{project_number}/services/{service_id}/resourcePolicies/{resource_policy_id} ``` Note that the `service_id` element must be a supported service ID. Currently, the following service IDs are supported: * `oauth2.googleapis.com` (Google Identity for iOS) `resource_policy_id` is a system-generated UID. "targetResource": "A String", # Required. Service specific name of the resource object to which this policy applies, in the format: * `//oauth2.googleapis.com/projects/{project_number}/oauthClients/{oauth_client_id}` (Google Identity for iOS) Note that the resource must belong to the service specified in the `name` and be from the same project as this policy, but the resource is allowed to be missing at the time of creation of this policy; in that case, we make a best-effort attempt at respecting this policy, but it may not have any effect until the resource is fully created. "updateTime": "A String", # Output only. Timestamp when this resource policy configuration object was most recently updated. }, @@ -162,7 +162,7 @@

Method Details

{ # App Check enforcement policy for a specific resource of a Firebase service supported by App Check. Note that this policy will override the service-level configuration. "enforcementMode": "A String", # Required. The App Check enforcement mode for this resource. This will override the EnforcementMode setting on the parent service. "etag": "A String", # This checksum is computed by the server based on the value of other fields, and may be sent on update and delete requests to ensure the client has an up-to-date value before proceeding. This etag is strongly validated as defined by RFC 7232. - "name": "A String", # Required. The relative name of the resource policy object, in the format: ``` projects/{project_number}/services/{service_id}/resourcePolicies/{resource_policy_id} ``` Note that the `service_id` element must be a supported service ID. Currently, the following service IDs are supported: * `oauth2.googleapis.com` (Google Identity for iOS) `resource_policy_id` is a system-generated UID. + "name": "A String", # Required. Identifier. The relative name of the resource policy object, in the format: ``` projects/{project_number}/services/{service_id}/resourcePolicies/{resource_policy_id} ``` Note that the `service_id` element must be a supported service ID. Currently, the following service IDs are supported: * `oauth2.googleapis.com` (Google Identity for iOS) `resource_policy_id` is a system-generated UID. "targetResource": "A String", # Required. Service specific name of the resource object to which this policy applies, in the format: * `//oauth2.googleapis.com/projects/{project_number}/oauthClients/{oauth_client_id}` (Google Identity for iOS) Note that the resource must belong to the service specified in the `name` and be from the same project as this policy, but the resource is allowed to be missing at the time of creation of this policy; in that case, we make a best-effort attempt at respecting this policy, but it may not have any effect until the resource is fully created. "updateTime": "A String", # Output only. Timestamp when this resource policy configuration object was most recently updated. } @@ -178,7 +178,7 @@

Method Details

{ # App Check enforcement policy for a specific resource of a Firebase service supported by App Check. Note that this policy will override the service-level configuration. "enforcementMode": "A String", # Required. The App Check enforcement mode for this resource. This will override the EnforcementMode setting on the parent service. "etag": "A String", # This checksum is computed by the server based on the value of other fields, and may be sent on update and delete requests to ensure the client has an up-to-date value before proceeding. This etag is strongly validated as defined by RFC 7232. - "name": "A String", # Required. The relative name of the resource policy object, in the format: ``` projects/{project_number}/services/{service_id}/resourcePolicies/{resource_policy_id} ``` Note that the `service_id` element must be a supported service ID. Currently, the following service IDs are supported: * `oauth2.googleapis.com` (Google Identity for iOS) `resource_policy_id` is a system-generated UID. + "name": "A String", # Required. Identifier. The relative name of the resource policy object, in the format: ``` projects/{project_number}/services/{service_id}/resourcePolicies/{resource_policy_id} ``` Note that the `service_id` element must be a supported service ID. Currently, the following service IDs are supported: * `oauth2.googleapis.com` (Google Identity for iOS) `resource_policy_id` is a system-generated UID. "targetResource": "A String", # Required. Service specific name of the resource object to which this policy applies, in the format: * `//oauth2.googleapis.com/projects/{project_number}/oauthClients/{oauth_client_id}` (Google Identity for iOS) Note that the resource must belong to the service specified in the `name` and be from the same project as this policy, but the resource is allowed to be missing at the time of creation of this policy; in that case, we make a best-effort attempt at respecting this policy, but it may not have any effect until the resource is fully created. "updateTime": "A String", # Output only. Timestamp when this resource policy configuration object was most recently updated. }
@@ -220,7 +220,7 @@

Method Details

{ # App Check enforcement policy for a specific resource of a Firebase service supported by App Check. Note that this policy will override the service-level configuration. "enforcementMode": "A String", # Required. The App Check enforcement mode for this resource. This will override the EnforcementMode setting on the parent service. "etag": "A String", # This checksum is computed by the server based on the value of other fields, and may be sent on update and delete requests to ensure the client has an up-to-date value before proceeding. This etag is strongly validated as defined by RFC 7232. - "name": "A String", # Required. The relative name of the resource policy object, in the format: ``` projects/{project_number}/services/{service_id}/resourcePolicies/{resource_policy_id} ``` Note that the `service_id` element must be a supported service ID. Currently, the following service IDs are supported: * `oauth2.googleapis.com` (Google Identity for iOS) `resource_policy_id` is a system-generated UID. + "name": "A String", # Required. Identifier. The relative name of the resource policy object, in the format: ``` projects/{project_number}/services/{service_id}/resourcePolicies/{resource_policy_id} ``` Note that the `service_id` element must be a supported service ID. Currently, the following service IDs are supported: * `oauth2.googleapis.com` (Google Identity for iOS) `resource_policy_id` is a system-generated UID. "targetResource": "A String", # Required. Service specific name of the resource object to which this policy applies, in the format: * `//oauth2.googleapis.com/projects/{project_number}/oauthClients/{oauth_client_id}` (Google Identity for iOS) Note that the resource must belong to the service specified in the `name` and be from the same project as this policy, but the resource is allowed to be missing at the time of creation of this policy; in that case, we make a best-effort attempt at respecting this policy, but it may not have any effect until the resource is fully created. "updateTime": "A String", # Output only. Timestamp when this resource policy configuration object was most recently updated. }
@@ -249,7 +249,7 @@

Method Details

{ # App Check enforcement policy for a specific resource of a Firebase service supported by App Check. Note that this policy will override the service-level configuration. "enforcementMode": "A String", # Required. The App Check enforcement mode for this resource. This will override the EnforcementMode setting on the parent service. "etag": "A String", # This checksum is computed by the server based on the value of other fields, and may be sent on update and delete requests to ensure the client has an up-to-date value before proceeding. This etag is strongly validated as defined by RFC 7232. - "name": "A String", # Required. The relative name of the resource policy object, in the format: ``` projects/{project_number}/services/{service_id}/resourcePolicies/{resource_policy_id} ``` Note that the `service_id` element must be a supported service ID. Currently, the following service IDs are supported: * `oauth2.googleapis.com` (Google Identity for iOS) `resource_policy_id` is a system-generated UID. + "name": "A String", # Required. Identifier. The relative name of the resource policy object, in the format: ``` projects/{project_number}/services/{service_id}/resourcePolicies/{resource_policy_id} ``` Note that the `service_id` element must be a supported service ID. Currently, the following service IDs are supported: * `oauth2.googleapis.com` (Google Identity for iOS) `resource_policy_id` is a system-generated UID. "targetResource": "A String", # Required. Service specific name of the resource object to which this policy applies, in the format: * `//oauth2.googleapis.com/projects/{project_number}/oauthClients/{oauth_client_id}` (Google Identity for iOS) Note that the resource must belong to the service specified in the `name` and be from the same project as this policy, but the resource is allowed to be missing at the time of creation of this policy; in that case, we make a best-effort attempt at respecting this policy, but it may not have any effect until the resource is fully created. "updateTime": "A String", # Output only. Timestamp when this resource policy configuration object was most recently updated. }, @@ -276,14 +276,14 @@

Method Details

Updates the specified ResourcePolicy configuration.
 
 Args:
-  name: string, Required. The relative name of the resource policy object, in the format: ``` projects/{project_number}/services/{service_id}/resourcePolicies/{resource_policy_id} ``` Note that the `service_id` element must be a supported service ID. Currently, the following service IDs are supported: * `oauth2.googleapis.com` (Google Identity for iOS) `resource_policy_id` is a system-generated UID. (required)
+  name: string, Required. Identifier. The relative name of the resource policy object, in the format: ``` projects/{project_number}/services/{service_id}/resourcePolicies/{resource_policy_id} ``` Note that the `service_id` element must be a supported service ID. Currently, the following service IDs are supported: * `oauth2.googleapis.com` (Google Identity for iOS) `resource_policy_id` is a system-generated UID. (required)
   body: object, The request body.
     The object takes the form of:
 
 { # App Check enforcement policy for a specific resource of a Firebase service supported by App Check. Note that this policy will override the service-level configuration.
   "enforcementMode": "A String", # Required. The App Check enforcement mode for this resource. This will override the EnforcementMode setting on the parent service.
   "etag": "A String", # This checksum is computed by the server based on the value of other fields, and may be sent on update and delete requests to ensure the client has an up-to-date value before proceeding. This etag is strongly validated as defined by RFC 7232.
-  "name": "A String", # Required. The relative name of the resource policy object, in the format: ``` projects/{project_number}/services/{service_id}/resourcePolicies/{resource_policy_id} ``` Note that the `service_id` element must be a supported service ID. Currently, the following service IDs are supported: * `oauth2.googleapis.com` (Google Identity for iOS) `resource_policy_id` is a system-generated UID.
+  "name": "A String", # Required. Identifier. The relative name of the resource policy object, in the format: ``` projects/{project_number}/services/{service_id}/resourcePolicies/{resource_policy_id} ``` Note that the `service_id` element must be a supported service ID. Currently, the following service IDs are supported: * `oauth2.googleapis.com` (Google Identity for iOS) `resource_policy_id` is a system-generated UID.
   "targetResource": "A String", # Required. Service specific name of the resource object to which this policy applies, in the format: * `//oauth2.googleapis.com/projects/{project_number}/oauthClients/{oauth_client_id}` (Google Identity for iOS) Note that the resource must belong to the service specified in the `name` and be from the same project as this policy, but the resource is allowed to be missing at the time of creation of this policy; in that case, we make a best-effort attempt at respecting this policy, but it may not have any effect until the resource is fully created.
   "updateTime": "A String", # Output only. Timestamp when this resource policy configuration object was most recently updated.
 }
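A sketch (editorial, not part of this patch) of the read-modify-write cycle this schema supports, carrying the strongly validated etag back on the patch; the policy name is a placeholder:

from googleapiclient.discovery import build

service = build("firebaseappcheck", "v1beta")
name = "projects/123456789/services/oauth2.googleapis.com/resourcePolicies/POLICY_UID"  # placeholder
policy = service.projects().services().resourcePolicies().get(name=name).execute()
service.projects().services().resourcePolicies().patch(
    name=name,
    updateMask="enforcement_mode",
    body={"enforcementMode": "ENFORCED", "etag": policy["etag"]},
).execute()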
@@ -300,7 +300,7 @@ 

Method Details

{ # App Check enforcement policy for a specific resource of a Firebase service supported by App Check. Note that this policy will override the service-level configuration. "enforcementMode": "A String", # Required. The App Check enforcement mode for this resource. This will override the EnforcementMode setting on the parent service. "etag": "A String", # This checksum is computed by the server based on the value of other fields, and may be sent on update and delete requests to ensure the client has an up-to-date value before proceeding. This etag is strongly validated as defined by RFC 7232. - "name": "A String", # Required. The relative name of the resource policy object, in the format: ``` projects/{project_number}/services/{service_id}/resourcePolicies/{resource_policy_id} ``` Note that the `service_id` element must be a supported service ID. Currently, the following service IDs are supported: * `oauth2.googleapis.com` (Google Identity for iOS) `resource_policy_id` is a system-generated UID. + "name": "A String", # Required. Identifier. The relative name of the resource policy object, in the format: ``` projects/{project_number}/services/{service_id}/resourcePolicies/{resource_policy_id} ``` Note that the `service_id` element must be a supported service ID. Currently, the following service IDs are supported: * `oauth2.googleapis.com` (Google Identity for iOS) `resource_policy_id` is a system-generated UID. "targetResource": "A String", # Required. Service specific name of the resource object to which this policy applies, in the format: * `//oauth2.googleapis.com/projects/{project_number}/oauthClients/{oauth_client_id}` (Google Identity for iOS) Note that the resource must belong to the service specified in the `name` and be from the same project as this policy, but the resource is allowed to be missing at the time of creation of this policy; in that case, we make a best-effort attempt at respecting this policy, but it may not have any effect until the resource is fully created. "updateTime": "A String", # Output only. Timestamp when this resource policy configuration object was most recently updated. }
diff --git a/docs/dyn/healthcare_v1beta1.projects.locations.datasets.fhirStores.html b/docs/dyn/healthcare_v1beta1.projects.locations.datasets.fhirStores.html index 5171430d13c..1658c43b7ea 100644 --- a/docs/dyn/healthcare_v1beta1.projects.locations.datasets.fhirStores.html +++ b/docs/dyn/healthcare_v1beta1.projects.locations.datasets.fhirStores.html @@ -1260,19 +1260,6 @@

Method Details

{ # Request to export the history of resources.
  "_since": "A String", # If provided, only resource versions updated after this time are exported. The time uses the format YYYY-MM-DDThh:mm:ss.sss+zz:zz. For example, `2015-02-07T13:28:17.239+02:00` or `2017-01-01T00:00:00Z`. The time must be specified to the second and include a time zone.
  "_type": "A String", # String of comma-delimited FHIR resource types. If provided, only resources of the specified resource type(s) are exported.
-  "bigqueryDestination": { # The configuration for exporting to BigQuery. # The BigQuery output destination. The Cloud Healthcare Service Agent requires two IAM roles on the BigQuery location: `roles/bigquery.dataEditor` and `roles/bigquery.jobUser`. The output is one BigQuery table per resource type. Unlike when setting `BigQueryDestination` for `StreamConfig`, `ExportResources` does not create BigQuery views.
-    "datasetUri": "A String", # BigQuery URI to an existing dataset, up to 2000 characters long, in the format `bq://projectId.bqDatasetId`.
-    "force": True or False, # Use `write_disposition` instead. If `write_disposition` is specified, this parameter is ignored. force=false is equivalent to write_disposition=WRITE_EMPTY and force=true is equivalent to write_disposition=WRITE_TRUNCATE.
-    "schemaConfig": { # Configuration for the FHIR BigQuery schema. Determines how the server generates the schema. # The configuration for the exported BigQuery schema.
-      "lastUpdatedPartitionConfig": { # Configuration for FHIR BigQuery time-partitioned tables. # The configuration for exported BigQuery tables to be partitioned by FHIR resource's last updated time column.
-        "expirationMs": "A String", # Number of milliseconds for which to keep the storage for a partition.
-        "type": "A String", # Type of partitioning.
-      },
-      "recursiveStructureDepth": "A String", # The depth for all recursive structures in the output analytics schema. For example, `concept` in the CodeSystem resource is a recursive structure; when the depth is 2, the CodeSystem table will have a column called `concept.concept` but not `concept.concept.concept`. If not specified or set to 0, the server will use the default value 2. The maximum depth allowed is 5.
-      "schemaType": "A String", # Specifies the output schema type. Schema type is required.
-    },
-    "writeDisposition": "A String", # Determines if existing data in the destination dataset is overwritten, appended to, or not written if the tables contain data. If a write_disposition is specified, the `force` parameter is ignored.
-  },
  "gcsDestination": { # The configuration for exporting to Cloud Storage. # The Cloud Storage output destination. The Healthcare Service Agent account requires the `roles/storage.objectAdmin` role on the Cloud Storage location. The exported outputs are organized by FHIR resource types. The server creates one or more objects per resource type depending on the volume of the resources exported. When there is only one object per resource type, the object name is in the form of `{operation_id}_history_{resource_type}`. When there are multiple objects for a given resource type, the object names are in the form of `{operation_id}_history_{resource_type}-{index}-of-{total}`. Each object contains newline delimited JSON, and each line is a FHIR history bundle containing the history for a single resource.
    "uriPrefix": "A String", # URI for a Cloud Storage directory where result files should be written (in the format `gs://{bucket-id}/{path/to/destination/dir}`).
If there is no trailing slash, the service appends one when composing the object path. The Cloud Storage bucket referenced in `uri_prefix` must exist or an error occurs. }, diff --git a/docs/dyn/iam_v1.projects.locations.oauthClients.html b/docs/dyn/iam_v1.projects.locations.oauthClients.html index 23d8b689562..326384df897 100644 --- a/docs/dyn/iam_v1.projects.locations.oauthClients.html +++ b/docs/dyn/iam_v1.projects.locations.oauthClients.html @@ -118,14 +118,14 @@

Method Details

body: object, The request body.
    The object takes the form of:

-{ # Represents an OauthClient. Used to access Google Cloud resources on behave of a user by using OAuth 2.0 Protocol to obtain an access token from Google Cloud Platform.
+{ # Represents an OauthClient. Used to access Google Cloud resources on behalf of a Workforce Identity Federation user by using OAuth 2.0 Protocol to obtain an access token from Google Cloud.
  "allowedGrantTypes": [ # Required. The list of OAuth grant types that are allowed for the OauthClient.
    "A String",
  ],
  "allowedRedirectUris": [ # Required. The list of redirect URIs that are allowed to redirect back when the authorization process is completed.
    "A String",
  ],
-  "allowedScopes": [ # Required. The list of scopes that the OauthClient is allowed to request during OAuth flows. The following scopes are supported: * `https://www.googleapis.com/auth/cloud-platform`: See, edit, configure, and delete your Google Cloud data and see the email address for your Google Account. * `openid`: Associate you with your personal info on Google Cloud. * `email`: See your Google Cloud Account email address.
+  "allowedScopes": [ # Required. The list of scopes that the OauthClient is allowed to request during OAuth flows. The following scopes are supported: * `https://www.googleapis.com/auth/cloud-platform`: See, edit, configure, and delete your Google Cloud data and see the email address for your Google Account.
    "A String",
  ],
  "clientId": "A String", # Output only. The system-generated OauthClient id.
@@ -147,14 +147,14 @@

Method Details

Returns: An object of the form: - { # Represents an OauthClient. Used to access Google Cloud resources on behave of a user by using OAuth 2.0 Protocol to obtain an access token from Google Cloud Platform. + { # Represents an OauthClient. Used to access Google Cloud resources on behalf of a Workforce Identity Federation user by using the OAuth 2.0 protocol to obtain an access token from Google Cloud. "allowedGrantTypes": [ # Required. The list of OAuth grant types that are allowed for the OauthClient. "A String", ], "allowedRedirectUris": [ # Required. The list of redirect uris that are allowed to redirect back when the authorization process is completed. "A String", ], - "allowedScopes": [ # Required. The list of scopes that the OauthClient is allowed to request during OAuth flows. The following scopes are supported: * `https://www.googleapis.com/auth/cloud-platform`: See, edit, configure, and delete your Google Cloud data and see the email address for your Google Account. * `openid`: Associate you with your personal info on Google Cloud. * `email`: See your Google Cloud Account email address. + "allowedScopes": [ # Required. The list of scopes that the OauthClient is allowed to request during OAuth flows. The following scopes are supported: * `https://www.googleapis.com/auth/cloud-platform`: See, edit, configure, and delete your Google Cloud data and see the email address for your Google Account. "A String", ], "clientId": "A String", # Output only. The system-generated OauthClient id. @@ -182,14 +182,14 @@

Method Details

Returns: An object of the form: - { # Represents an OauthClient. Used to access Google Cloud resources on behave of a user by using OAuth 2.0 Protocol to obtain an access token from Google Cloud Platform. + { # Represents an OauthClient. Used to access Google Cloud resources on behalf of a Workforce Identity Federation user by using the OAuth 2.0 protocol to obtain an access token from Google Cloud. "allowedGrantTypes": [ # Required. The list of OAuth grant types that are allowed for the OauthClient. "A String", ], "allowedRedirectUris": [ # Required. The list of redirect uris that are allowed to redirect back when the authorization process is completed. "A String", ], - "allowedScopes": [ # Required. The list of scopes that the OauthClient is allowed to request during OAuth flows. The following scopes are supported: * `https://www.googleapis.com/auth/cloud-platform`: See, edit, configure, and delete your Google Cloud data and see the email address for your Google Account. * `openid`: Associate you with your personal info on Google Cloud. * `email`: See your Google Cloud Account email address. + "allowedScopes": [ # Required. The list of scopes that the OauthClient is allowed to request during OAuth flows. The following scopes are supported: * `https://www.googleapis.com/auth/cloud-platform`: See, edit, configure, and delete your Google Cloud data and see the email address for your Google Account. "A String", ], "clientId": "A String", # Output only. The system-generated OauthClient id. @@ -217,14 +217,14 @@

Method Details

Returns: An object of the form: - { # Represents an OauthClient. Used to access Google Cloud resources on behave of a user by using OAuth 2.0 Protocol to obtain an access token from Google Cloud Platform. + { # Represents an OauthClient. Used to access Google Cloud resources on behalf of a Workforce Identity Federation user by using the OAuth 2.0 protocol to obtain an access token from Google Cloud. "allowedGrantTypes": [ # Required. The list of OAuth grant types that are allowed for the OauthClient. "A String", ], "allowedRedirectUris": [ # Required. The list of redirect uris that are allowed to redirect back when the authorization process is completed. "A String", ], - "allowedScopes": [ # Required. The list of scopes that the OauthClient is allowed to request during OAuth flows. The following scopes are supported: * `https://www.googleapis.com/auth/cloud-platform`: See, edit, configure, and delete your Google Cloud data and see the email address for your Google Account. * `openid`: Associate you with your personal info on Google Cloud. * `email`: See your Google Cloud Account email address. + "allowedScopes": [ # Required. The list of scopes that the OauthClient is allowed to request during OAuth flows. The following scopes are supported: * `https://www.googleapis.com/auth/cloud-platform`: See, edit, configure, and delete your Google Cloud data and see the email address for your Google Account. "A String", ], "clientId": "A String", # Output only. The system-generated OauthClient id. @@ -258,14 +258,14 @@

Method Details

{ # Response message for ListOauthClients. "nextPageToken": "A String", # Optional. A token, which can be sent as `page_token` to retrieve the next page. If this field is omitted, there are no subsequent pages. "oauthClients": [ # A list of OauthClients. - { # Represents an OauthClient. Used to access Google Cloud resources on behave of a user by using OAuth 2.0 Protocol to obtain an access token from Google Cloud Platform. + { # Represents an OauthClient. Used to access Google Cloud resources on behalf of a Workforce Identity Federation user by using the OAuth 2.0 protocol to obtain an access token from Google Cloud. "allowedGrantTypes": [ # Required. The list of OAuth grant types that are allowed for the OauthClient. "A String", ], "allowedRedirectUris": [ # Required. The list of redirect uris that are allowed to redirect back when the authorization process is completed. "A String", ], - "allowedScopes": [ # Required. The list of scopes that the OauthClient is allowed to request during OAuth flows. The following scopes are supported: * `https://www.googleapis.com/auth/cloud-platform`: See, edit, configure, and delete your Google Cloud data and see the email address for your Google Account. * `openid`: Associate you with your personal info on Google Cloud. * `email`: See your Google Cloud Account email address. + "allowedScopes": [ # Required. The list of scopes that the OauthClient is allowed to request during OAuth flows. The following scopes are supported: * `https://www.googleapis.com/auth/cloud-platform`: See, edit, configure, and delete your Google Cloud data and see the email address for your Google Account. "A String", ], "clientId": "A String", # Output only. The system-generated OauthClient id. @@ -304,14 +304,14 @@
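Editor's note: a sketch of paging through this ListOauthClients response with the library's standard list()/list_next() convention (the same convention these docs list as an instance method); the parent value is a placeholder.

from googleapiclient import discovery

iam = discovery.build('iam', 'v1')
clients_api = iam.projects().locations().oauthClients()

request = clients_api.list(parent='projects/my-project/locations/global')
while request is not None:
    response = request.execute()
    for oauth_client in response.get('oauthClients', []):
        print(oauth_client['clientId'])
    # list_next() returns None once nextPageToken is absent, i.e. no more pages.
    request = clients_api.list_next(request, response)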

Method Details

body: object, The request body. The object takes the form of: -{ # Represents an OauthClient. Used to access Google Cloud resources on behave of a user by using OAuth 2.0 Protocol to obtain an access token from Google Cloud Platform. +{ # Represents an OauthClient. Used to access Google Cloud resources on behalf of a Workforce Identity Federation user by using the OAuth 2.0 protocol to obtain an access token from Google Cloud. "allowedGrantTypes": [ # Required. The list of OAuth grant types that are allowed for the OauthClient. "A String", ], "allowedRedirectUris": [ # Required. The list of redirect uris that are allowed to redirect back when the authorization process is completed. "A String", ], - "allowedScopes": [ # Required. The list of scopes that the OauthClient is allowed to request during OAuth flows. The following scopes are supported: * `https://www.googleapis.com/auth/cloud-platform`: See, edit, configure, and delete your Google Cloud data and see the email address for your Google Account. * `openid`: Associate you with your personal info on Google Cloud. * `email`: See your Google Cloud Account email address. + "allowedScopes": [ # Required. The list of scopes that the OauthClient is allowed to request during OAuth flows. The following scopes are supported: * `https://www.googleapis.com/auth/cloud-platform`: See, edit, configure, and delete your Google Cloud data and see the email address for your Google Account. "A String", ], "clientId": "A String", # Output only. The system-generated OauthClient id. @@ -333,14 +333,14 @@

Method Details

Returns: An object of the form: - { # Represents an OauthClient. Used to access Google Cloud resources on behave of a user by using OAuth 2.0 Protocol to obtain an access token from Google Cloud Platform. + { # Represents an OauthClient. Used to access Google Cloud resources on behalf of a Workforce Identity Federation user by using the OAuth 2.0 protocol to obtain an access token from Google Cloud. "allowedGrantTypes": [ # Required. The list of OAuth grant types that are allowed for the OauthClient. "A String", ], "allowedRedirectUris": [ # Required. The list of redirect uris that are allowed to redirect back when the authorization process is completed. "A String", ], - "allowedScopes": [ # Required. The list of scopes that the OauthClient is allowed to request during OAuth flows. The following scopes are supported: * `https://www.googleapis.com/auth/cloud-platform`: See, edit, configure, and delete your Google Cloud data and see the email address for your Google Account. * `openid`: Associate you with your personal info on Google Cloud. * `email`: See your Google Cloud Account email address. + "allowedScopes": [ # Required. The list of scopes that the OauthClient is allowed to request during OAuth flows. The following scopes are supported: * `https://www.googleapis.com/auth/cloud-platform`: See, edit, configure, and delete your Google Cloud data and see the email address for your Google Account. "A String", ], "clientId": "A String", # Output only. The system-generated OauthClient id. @@ -374,14 +374,14 @@

Method Details

Returns: An object of the form: - { # Represents an OauthClient. Used to access Google Cloud resources on behave of a user by using OAuth 2.0 Protocol to obtain an access token from Google Cloud Platform. + { # Represents an OauthClient. Used to access Google Cloud resources on behalf of a Workforce Identity Federation user by using the OAuth 2.0 protocol to obtain an access token from Google Cloud. "allowedGrantTypes": [ # Required. The list of OAuth grant types that are allowed for the OauthClient. "A String", ], "allowedRedirectUris": [ # Required. The list of redirect uris that are allowed to redirect back when the authorization process is completed. "A String", ], - "allowedScopes": [ # Required. The list of scopes that the OauthClient is allowed to request during OAuth flows. The following scopes are supported: * `https://www.googleapis.com/auth/cloud-platform`: See, edit, configure, and delete your Google Cloud data and see the email address for your Google Account. * `openid`: Associate you with your personal info on Google Cloud. * `email`: See your Google Cloud Account email address. + "allowedScopes": [ # Required. The list of scopes that the OauthClient is allowed to request during OAuth flows. The following scopes are supported: * `https://www.googleapis.com/auth/cloud-platform`: See, edit, configure, and delete your Google Cloud data and see the email address for your Google Account. "A String", ], "clientId": "A String", # Output only. The system-generated OauthClient id. diff --git a/docs/dyn/iap_v1.v1.html index bb99c04caaa..b831e33963a 100644 --- a/docs/dyn/iap_v1.v1.html +++ b/docs/dyn/iap_v1.v1.html @@ -190,7 +190,7 @@

Method Details

"policyName": { # An internal name for an IAM policy, based on the resource to which the policy applies. Not to be confused with a resource's external full resource name. For more information on this distinction, see go/iam-full-resource-names. # Policy name to be checked "id": "A String", # Identifies an instance of the type. ID format varies by type. The ID format is defined in the IAM .service file that defines the type, either in path_mapping or in a comment. "region": "A String", # For Cloud IAM: The location of the Policy. Must be empty or "global" for Policies owned by global IAM. Must name a region from prodspec/cloud-iam-cloudspec for Regional IAM Policies, see go/iam-faq#where-is-iam-currently-deployed. For Local IAM: This field should be set to "local". - "type": "A String", # Resource type. Types are defined in IAM's .service files. Valid values for type might be 'gce', 'gcs', 'project', 'account' etc. + "type": "A String", # Resource type. Types are defined in IAM's .service files. Valid values for type might be 'storage_buckets', 'compute_instances', 'resourcemanager_customers', 'billing_accounts', etc. }, "resource": { # IAM resource to check permission on "expectedNextState": { # The proto or JSON formatted expected next state of the resource, wrapped in a google.protobuf.Any proto, against which the policy rules are evaluated. Services not integrated with custom org policy can omit this field. Services integrated with custom org policy must populate this field for all requests where the API call changes the state of the resource. Custom org policy backend uses these attributes to enforce custom org policies. When a proto is wrapped, it is generally the One Platform API proto. When a JSON string is wrapped, use `google.protobuf.StringValue` for the inner value. For create operations, GCP service is expected to pass resource from customer request as is. For update/patch operations, GCP service is expected to compute the next state with the patch provided by the user. See go/custom-constraints-org-policy-integration-guide for additional details. @@ -371,7 +371,7 @@

Method Details

"policyName": { # An internal name for an IAM policy, based on the resource to which the policy applies. Not to be confused with a resource's external full resource name. For more information on this distinction, see go/iam-full-resource-names. # Policy name to be checked "id": "A String", # Identifies an instance of the type. ID format varies by type. The ID format is defined in the IAM .service file that defines the type, either in path_mapping or in a comment. "region": "A String", # For Cloud IAM: The location of the Policy. Must be empty or "global" for Policies owned by global IAM. Must name a region from prodspec/cloud-iam-cloudspec for Regional IAM Policies, see go/iam-faq#where-is-iam-currently-deployed. For Local IAM: This field should be set to "local". - "type": "A String", # Resource type. Types are defined in IAM's .service files. Valid values for type might be 'gce', 'gcs', 'project', 'account' etc. + "type": "A String", # Resource type. Types are defined in IAM's .service files. Valid values for type might be 'storage_buckets', 'compute_instances', 'resourcemanager_customers', 'billing_accounts', etc. }, "resource": { # IAM resource to check permission on "expectedNextState": { # The proto or JSON formatted expected next state of the resource, wrapped in a google.protobuf.Any proto, against which the policy rules are evaluated. Services not integrated with custom org policy can omit this field. Services integrated with custom org policy must populate this field for all requests where the API call changes the state of the resource. Custom org policy backend uses these attributes to enforce custom org policies. When a proto is wrapped, it is generally the One Platform API proto. When a JSON string is wrapped, use `google.protobuf.StringValue` for the inner value. For create operations, GCP service is expected to pass resource from customer request as is. For update/patch operations, GCP service is expected to compute the next state with the patch provided by the user. See go/custom-constraints-org-policy-integration-guide for additional details. @@ -463,7 +463,7 @@

Method Details

"policyName": { # An internal name for an IAM policy, based on the resource to which the policy applies. Not to be confused with a resource's external full resource name. For more information on this distinction, see go/iam-full-resource-names. # Policy name to be checked "id": "A String", # Identifies an instance of the type. ID format varies by type. The ID format is defined in the IAM .service file that defines the type, either in path_mapping or in a comment. "region": "A String", # For Cloud IAM: The location of the Policy. Must be empty or "global" for Policies owned by global IAM. Must name a region from prodspec/cloud-iam-cloudspec for Regional IAM Policies, see go/iam-faq#where-is-iam-currently-deployed. For Local IAM: This field should be set to "local". - "type": "A String", # Resource type. Types are defined in IAM's .service files. Valid values for type might be 'gce', 'gcs', 'project', 'account' etc. + "type": "A String", # Resource type. Types are defined in IAM's .service files. Valid values for type might be 'storage_buckets', 'compute_instances', 'resourcemanager_customers', 'billing_accounts', etc. }, "resource": { # IAM resource to check permission on "expectedNextState": { # The proto or JSON formatted expected next state of the resource, wrapped in a google.protobuf.Any proto, against which the policy rules are evaluated. Services not integrated with custom org policy can omit this field. Services integrated with custom org policy must populate this field for all requests where the API call changes the state of the resource. Custom org policy backend uses these attributes to enforce custom org policies. When a proto is wrapped, it is generally the One Platform API proto. When a JSON string is wrapped, use `google.protobuf.StringValue` for the inner value. For create operations, GCP service is expected to pass resource from customer request as is. For update/patch operations, GCP service is expected to compute the next state with the patch provided by the user. See go/custom-constraints-org-policy-integration-guide for additional details. diff --git a/docs/dyn/identitytoolkit_v2.projects.defaultSupportedIdpConfigs.html b/docs/dyn/identitytoolkit_v2.projects.defaultSupportedIdpConfigs.html index 99488688ce6..748b7c1a74c 100644 --- a/docs/dyn/identitytoolkit_v2.projects.defaultSupportedIdpConfigs.html +++ b/docs/dyn/identitytoolkit_v2.projects.defaultSupportedIdpConfigs.html @@ -115,7 +115,7 @@

Method Details

"bundleIds": [ # A list of Bundle ID's usable by this project "A String", ], - "codeFlowConfig": { # Additional config for Apple for code flow. + "codeFlowConfig": { # Additional config for Apple for code flow. # Additional config for Apple for code flow. "keyId": "A String", # Key ID for the private key. "privateKey": "A String", # Private key used for signing the client secret JWT. "teamId": "A String", # Apple Developer Team ID. @@ -141,7 +141,7 @@

Method Details

"bundleIds": [ # A list of Bundle ID's usable by this project "A String", ], - "codeFlowConfig": { # Additional config for Apple for code flow. + "codeFlowConfig": { # Additional config for Apple for code flow. # Additional config for Apple for code flow. "keyId": "A String", # Key ID for the private key. "privateKey": "A String", # Private key used for signing the client secret JWT. "teamId": "A String", # Apple Developer Team ID. @@ -191,7 +191,7 @@

Method Details

"bundleIds": [ # A list of Bundle ID's usable by this project "A String", ], - "codeFlowConfig": { # Additional config for Apple for code flow. + "codeFlowConfig": { # Additional config for Apple for code flow. # Additional config for Apple for code flow. "keyId": "A String", # Key ID for the private key. "privateKey": "A String", # Private key used for signing the client secret JWT. "teamId": "A String", # Apple Developer Team ID. @@ -227,7 +227,7 @@

Method Details

"bundleIds": [ # A list of Bundle ID's usable by this project "A String", ], - "codeFlowConfig": { # Additional config for Apple for code flow. + "codeFlowConfig": { # Additional config for Apple for code flow. # Additional config for Apple for code flow. "keyId": "A String", # Key ID for the private key. "privateKey": "A String", # Private key used for signing the client secret JWT. "teamId": "A String", # Apple Developer Team ID. @@ -271,7 +271,7 @@

Method Details

"bundleIds": [ # A list of Bundle ID's usable by this project "A String", ], - "codeFlowConfig": { # Additional config for Apple for code flow. + "codeFlowConfig": { # Additional config for Apple for code flow. # Additional config for Apple for code flow. "keyId": "A String", # Key ID for the private key. "privateKey": "A String", # Private key used for signing the client secret JWT. "teamId": "A String", # Apple Developer Team ID. @@ -297,7 +297,7 @@

Method Details

"bundleIds": [ # A list of Bundle ID's usable by this project "A String", ], - "codeFlowConfig": { # Additional config for Apple for code flow. + "codeFlowConfig": { # Additional config for Apple for code flow. # Additional config for Apple for code flow. "keyId": "A String", # Key ID for the private key. "privateKey": "A String", # Private key used for signing the client secret JWT. "teamId": "A String", # Apple Developer Team ID. diff --git a/docs/dyn/identitytoolkit_v2.projects.tenants.defaultSupportedIdpConfigs.html b/docs/dyn/identitytoolkit_v2.projects.tenants.defaultSupportedIdpConfigs.html index 290922268cc..0eedca1e0a9 100644 --- a/docs/dyn/identitytoolkit_v2.projects.tenants.defaultSupportedIdpConfigs.html +++ b/docs/dyn/identitytoolkit_v2.projects.tenants.defaultSupportedIdpConfigs.html @@ -115,7 +115,7 @@

Method Details

"bundleIds": [ # A list of Bundle ID's usable by this project "A String", ], - "codeFlowConfig": { # Additional config for Apple for code flow. + "codeFlowConfig": { # Additional config for Apple for code flow. # Additional config for Apple for code flow. "keyId": "A String", # Key ID for the private key. "privateKey": "A String", # Private key used for signing the client secret JWT. "teamId": "A String", # Apple Developer Team ID. @@ -141,7 +141,7 @@

Method Details

"bundleIds": [ # A list of Bundle ID's usable by this project "A String", ], - "codeFlowConfig": { # Additional config for Apple for code flow. + "codeFlowConfig": { # Additional config for Apple for code flow. # Additional config for Apple for code flow. "keyId": "A String", # Key ID for the private key. "privateKey": "A String", # Private key used for signing the client secret JWT. "teamId": "A String", # Apple Developer Team ID. @@ -191,7 +191,7 @@

Method Details

"bundleIds": [ # A list of Bundle ID's usable by this project "A String", ], - "codeFlowConfig": { # Additional config for Apple for code flow. + "codeFlowConfig": { # Additional config for Apple for code flow. # Additional config for Apple for code flow. "keyId": "A String", # Key ID for the private key. "privateKey": "A String", # Private key used for signing the client secret JWT. "teamId": "A String", # Apple Developer Team ID. @@ -227,7 +227,7 @@

Method Details

"bundleIds": [ # A list of Bundle ID's usable by this project "A String", ], - "codeFlowConfig": { # Additional config for Apple for code flow. + "codeFlowConfig": { # Additional config for Apple for code flow. # Additional config for Apple for code flow. "keyId": "A String", # Key ID for the private key. "privateKey": "A String", # Private key used for signing the client secret JWT. "teamId": "A String", # Apple Developer Team ID. @@ -271,7 +271,7 @@

Method Details

"bundleIds": [ # A list of Bundle ID's usable by this project "A String", ], - "codeFlowConfig": { # Additional config for Apple for code flow. + "codeFlowConfig": { # Additional config for Apple for code flow. # Additional config for Apple for code flow. "keyId": "A String", # Key ID for the private key. "privateKey": "A String", # Private key used for signing the client secret JWT. "teamId": "A String", # Apple Developer Team ID. @@ -297,7 +297,7 @@

Method Details

"bundleIds": [ # A list of Bundle ID's usable by this project "A String", ], - "codeFlowConfig": { # Additional config for Apple for code flow. + "codeFlowConfig": { # Additional config for Apple for code flow. # Additional config for Apple for code flow. "keyId": "A String", # Key ID for the private key. "privateKey": "A String", # Private key used for signing the client secret JWT. "teamId": "A String", # Apple Developer Team ID. diff --git a/docs/dyn/integrations_v1.projects.locations.integrations.executions.html b/docs/dyn/integrations_v1.projects.locations.integrations.executions.html index babd67660ca..02d6ef553aa 100644 --- a/docs/dyn/integrations_v1.projects.locations.integrations.executions.html +++ b/docs/dyn/integrations_v1.projects.locations.integrations.executions.html @@ -94,6 +94,9 @@

Instance Methods

list_next()

Retrieves the next page of results.

+

+ replay(name, body=None, x__xgafv=None)

+

Re-execute an existing execution with the same request parameters and execution strategy

Method Details

close() @@ -150,8 +153,9 @@

Method Details

}, ], "eventExecutionSnapshot": [ - { # Contains the snapshot of the event execution for a given checkpoint. Next available id: 13 + { # Contains the snapshot of the event execution for a given checkpoint. Next available id: 15 "checkpointTaskNumber": "A String", # Indicates "right after which checkpoint task's execution" this snapshot is taken. + "clientId": "A String", # Client that the execution snapshot is associated to. "conditionResults": [ # All of the computed conditions that been calculated. { # Contains the combined condition calculation results. "currentTaskNumber": "A String", # the current task number. @@ -284,6 +288,7 @@

Method Details

}, ], "taskName": "A String", # The task name associated with this snapshot. Could be empty. + "workflowName": "A String", # Name of the workflow this event execution snapshot belongs to. }, ], "eventExecutionSnapshotsSize": "A String", # Total size of all event_execution_snapshots for an execution @@ -883,8 +888,9 @@

Method Details

}, ], "eventExecutionSnapshot": [ - { # Contains the snapshot of the event execution for a given checkpoint. Next available id: 13 + { # Contains the snapshot of the event execution for a given checkpoint. Next available id: 15 "checkpointTaskNumber": "A String", # Indicates "right after which checkpoint task's execution" this snapshot is taken. + "clientId": "A String", # Client that the execution snapshot is associated to. "conditionResults": [ # All of the computed conditions that been calculated. { # Contains the combined condition calculation results. "currentTaskNumber": "A String", # the current task number. @@ -1017,6 +1023,7 @@

Method Details

}, ], "taskName": "A String", # The task name associated with this snapshot. Could be empty. + "workflowName": "A String", # Name of the workflow this event execution snapshot belongs to. }, ], "eventExecutionSnapshotsSize": "A String", # Total size of all event_execution_snapshots for an execution @@ -1282,4 +1289,34 @@

Method Details

+
+ replay(name, body=None, x__xgafv=None) +
Re-execute an existing execution with the same request parameters and execution strategy
+
+Args:
+  name: string, Required. The execution resource name. Format: projects/{gcp_project_id}/locations/{location}/integrations/{integration}/executions/{execution_id} (required)
+  body: object, The request body.
+    The object takes the form of:
+
+{ # Request for replaying an execution. Next ID: 3
+  "replayReason": "A String", # Optional. The user-provided reason for replaying the execution.
+}
+
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Response for replaying an execution. Next ID: 4
+  "executionId": "A String", # The id of the execution corresponding to this run of the integration.
+  "outputParameters": { # OUTPUT parameters in the form of a map, keyed by parameter name. The parameters are only present in case of synchronous execution. Note: names of system-generated parameters are wrapped in backticks (`) to distinguish them from user-defined parameters.
+    "a_key": "", # Properties of the object.
+  },
+  "replayedExecutionId": "A String", # The id of the execution that was replayed.
+}
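Editor's note: a short sketch of calling the new replay method; the resource name and reason string are placeholders, everything else follows the signature and body documented above.

from googleapiclient import discovery

service = discovery.build('integrations', 'v1')

name = ('projects/my-project/locations/us-central1/integrations/'
        'my-integration/executions/1234567890')
response = service.projects().locations().integrations().executions().replay(
    name=name,
    body={'replayReason': 'Re-running after an upstream outage'},
).execute()
print(response['executionId'])               # the new run of the integration
print(response.get('replayedExecutionId'))   # the execution that was replayed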
+
+ \ No newline at end of file diff --git a/docs/dyn/integrations_v1.projects.locations.integrations.html b/docs/dyn/integrations_v1.projects.locations.integrations.html index 5844e033d1b..5174e81c3f0 100644 --- a/docs/dyn/integrations_v1.projects.locations.integrations.html +++ b/docs/dyn/integrations_v1.projects.locations.integrations.html @@ -928,6 +928,22 @@

Method Details

"status": "A String", # Output only. Generated by eventbus. User should not set it as an input. "taskConfigs": [ # Optional. Task configuration for the integration. It's optional, but the integration doesn't do anything without task_configs. { # The task configuration details. This is not the implementation of Task. There might be multiple TaskConfigs for the same Task. + "conditionalFailurePolicies": { # Conditional task failur retry strategies # Optional. The list of conditional failure policies that will be applied to the task in order. + "defaultFailurePolicy": { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e, a `retry_strategy` of NONE will be applied). # The default failure policy to be applied if no conditional failure policy matches. + "condition": "A String", # Optional. The string condition that will be evaluated to determine if the task should be retried with this failure policy. + "intervalTime": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the initial interval in seconds for backoff. + "maxRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the number of times the task will be retried if failed. + "retryStrategy": "A String", # Defines what happens to the task upon failure. + }, + "failurePolicies": [ # The list of failure policies that will be applied to the task in order. + { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e, a `retry_strategy` of NONE will be applied). + "condition": "A String", # Optional. The string condition that will be evaluated to determine if the task should be retried with this failure policy. + "intervalTime": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the initial interval in seconds for backoff. + "maxRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the number of times the task will be retried if failed. + "retryStrategy": "A String", # Defines what happens to the task upon failure. + }, + ], + }, "description": "A String", # Optional. User-provided description intended to give additional business context about the task. "displayName": "A String", # Optional. User-provided label that is attached to this TaskConfig in the UI. "errorCatcherId": "A String", # Optional. Optional Error catcher id of the error catch flow which will be executed when execution error happens in the task @@ -1033,6 +1049,22 @@

Method Details

}, }, ], + "conditionalFailurePolicies": { # Optional. Determines the number of times the task will be retried on failure and with what retry strategy. This is applicable for synchronous calls to Eventbus alone (Post). + "defaultFailurePolicy": { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e., a `retry_strategy` of NONE will be applied). # The default failure policy to be applied if no conditional failure policy matches. + "intervalInSeconds": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the initial interval for backoff. + "maxNumRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the number of times the task will be retried if failed. + "retryCondition": "A String", # Optional. The retry condition that will be evaluated for this failure policy with the corresponding retry strategy. + "retryStrategy": "A String", # Defines what happens to the task upon failure. + }, + "failurePolicies": [ # The list of failure policies that will be applied to the task in order. + { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e., a `retry_strategy` of NONE will be applied). + "intervalInSeconds": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the initial interval for backoff. + "maxNumRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the number of times the task will be retried if failed. + "retryCondition": "A String", # Optional. The retry condition that will be evaluated for this failure policy with the corresponding retry strategy. + "retryStrategy": "A String", # Defines what happens to the task upon failure. + }, + ], + }, "createTime": "A String", # Auto-generated. "creatorEmail": "A String", # The creator's email address. Auto-generated from the user's email. "description": "A String", # User-provided description intended to give more business context about the task. diff --git a/docs/dyn/integrations_v1.projects.locations.integrations.versions.html index 45c097b0ad3..49af5b2a6f3 100644 --- a/docs/dyn/integrations_v1.projects.locations.integrations.versions.html +++ b/docs/dyn/integrations_v1.projects.locations.integrations.versions.html @@ -400,6 +400,22 @@
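Editor's note: the eventbus-side task schema in this hunk carries the same policy with different field names than the TaskConfig variant above (intervalInSeconds/maxNumRetries/retryCondition rather than intervalTime/maxRetries/condition). A small sketch of that variant; values are illustrative.

eventbus_failure_policy = {
    'retryStrategy': 'FIXED_INTERVAL',      # assumed enum value
    'intervalInSeconds': '30',              # initial backoff interval
    'maxNumRetries': 3,
    'retryCondition': '$errorCode$ = "503"',  # illustrative condition syntax
}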

Method Details

"status": "A String", # Output only. Generated by eventbus. User should not set it as an input. "taskConfigs": [ # Optional. Task configuration for the integration. It's optional, but the integration doesn't do anything without task_configs. { # The task configuration details. This is not the implementation of Task. There might be multiple TaskConfigs for the same Task. + "conditionalFailurePolicies": { # Conditional task failur retry strategies # Optional. The list of conditional failure policies that will be applied to the task in order. + "defaultFailurePolicy": { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e, a `retry_strategy` of NONE will be applied). # The default failure policy to be applied if no conditional failure policy matches. + "condition": "A String", # Optional. The string condition that will be evaluated to determine if the task should be retried with this failure policy. + "intervalTime": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the initial interval in seconds for backoff. + "maxRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the number of times the task will be retried if failed. + "retryStrategy": "A String", # Defines what happens to the task upon failure. + }, + "failurePolicies": [ # The list of failure policies that will be applied to the task in order. + { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e, a `retry_strategy` of NONE will be applied). + "condition": "A String", # Optional. The string condition that will be evaluated to determine if the task should be retried with this failure policy. + "intervalTime": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the initial interval in seconds for backoff. + "maxRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the number of times the task will be retried if failed. + "retryStrategy": "A String", # Defines what happens to the task upon failure. + }, + ], + }, "description": "A String", # Optional. User-provided description intended to give additional business context about the task. "displayName": "A String", # Optional. User-provided label that is attached to this TaskConfig in the UI. "errorCatcherId": "A String", # Optional. Optional Error catcher id of the error catch flow which will be executed when execution error happens in the task @@ -505,6 +521,22 @@

Method Details

}, }, ], + "conditionalFailurePolicies": { # Optional. Determines the number of times the task will be retried on failure and with what retry strategy. This is applicable for synchronous calls to Eventbus alone (Post). + "defaultFailurePolicy": { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e., a `retry_strategy` of NONE will be applied). # The default failure policy to be applied if no conditional failure policy matches. + "intervalInSeconds": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the initial interval for backoff. + "maxNumRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the number of times the task will be retried if failed. + "retryCondition": "A String", # Optional. The retry condition that will be evaluated for this failure policy with the corresponding retry strategy. + "retryStrategy": "A String", # Defines what happens to the task upon failure. + }, + "failurePolicies": [ # The list of failure policies that will be applied to the task in order. + { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e., a `retry_strategy` of NONE will be applied). + "intervalInSeconds": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the initial interval for backoff. + "maxNumRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the number of times the task will be retried if failed. + "retryCondition": "A String", # Optional. The retry condition that will be evaluated for this failure policy with the corresponding retry strategy. + "retryStrategy": "A String", # Defines what happens to the task upon failure. + }, + ], + }, "createTime": "A String", # Auto-generated. "creatorEmail": "A String", # The creator's email address. Auto-generated from the user's email. "description": "A String", # User-provided description intended to give more business context about the task. @@ -1415,6 +1447,22 @@

Method Details

"status": "A String", # Output only. Generated by eventbus. User should not set it as an input. "taskConfigs": [ # Optional. Task configuration for the integration. It's optional, but the integration doesn't do anything without task_configs. { # The task configuration details. This is not the implementation of Task. There might be multiple TaskConfigs for the same Task. + "conditionalFailurePolicies": { # Conditional task failur retry strategies # Optional. The list of conditional failure policies that will be applied to the task in order. + "defaultFailurePolicy": { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e, a `retry_strategy` of NONE will be applied). # The default failure policy to be applied if no conditional failure policy matches. + "condition": "A String", # Optional. The string condition that will be evaluated to determine if the task should be retried with this failure policy. + "intervalTime": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the initial interval in seconds for backoff. + "maxRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the number of times the task will be retried if failed. + "retryStrategy": "A String", # Defines what happens to the task upon failure. + }, + "failurePolicies": [ # The list of failure policies that will be applied to the task in order. + { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e, a `retry_strategy` of NONE will be applied). + "condition": "A String", # Optional. The string condition that will be evaluated to determine if the task should be retried with this failure policy. + "intervalTime": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the initial interval in seconds for backoff. + "maxRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the number of times the task will be retried if failed. + "retryStrategy": "A String", # Defines what happens to the task upon failure. + }, + ], + }, "description": "A String", # Optional. User-provided description intended to give additional business context about the task. "displayName": "A String", # Optional. User-provided label that is attached to this TaskConfig in the UI. "errorCatcherId": "A String", # Optional. Optional Error catcher id of the error catch flow which will be executed when execution error happens in the task @@ -1520,6 +1568,22 @@

Method Details

}, }, ], + "conditionalFailurePolicies": { # Optional. Determines the number of times the task will be retried on failure and with what retry strategy. This is applicable for synchronous calls to Eventbus alone (Post). + "defaultFailurePolicy": { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e., a `retry_strategy` of NONE will be applied). # The default failure policy to be applied if no conditional failure policy matches. + "intervalInSeconds": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the initial interval for backoff. + "maxNumRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the number of times the task will be retried if failed. + "retryCondition": "A String", # Optional. The retry condition that will be evaluated for this failure policy with the corresponding retry strategy. + "retryStrategy": "A String", # Defines what happens to the task upon failure. + }, + "failurePolicies": [ # The list of failure policies that will be applied to the task in order. + { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e., a `retry_strategy` of NONE will be applied). + "intervalInSeconds": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the initial interval for backoff. + "maxNumRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the number of times the task will be retried if failed. + "retryCondition": "A String", # Optional. The retry condition that will be evaluated for this failure policy with the corresponding retry strategy. + "retryStrategy": "A String", # Defines what happens to the task upon failure. + }, + ], + }, "createTime": "A String", # Auto-generated. "creatorEmail": "A String", # The creator's email address. Auto-generated from the user's email. "description": "A String", # User-provided description intended to give more business context about the task. @@ -2499,6 +2563,22 @@

Method Details

"status": "A String", # Output only. Generated by eventbus. User should not set it as an input. "taskConfigs": [ # Optional. Task configuration for the integration. It's optional, but the integration doesn't do anything without task_configs. { # The task configuration details. This is not the implementation of Task. There might be multiple TaskConfigs for the same Task. + "conditionalFailurePolicies": { # Conditional task failur retry strategies # Optional. The list of conditional failure policies that will be applied to the task in order. + "defaultFailurePolicy": { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e, a `retry_strategy` of NONE will be applied). # The default failure policy to be applied if no conditional failure policy matches. + "condition": "A String", # Optional. The string condition that will be evaluated to determine if the task should be retried with this failure policy. + "intervalTime": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the initial interval in seconds for backoff. + "maxRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the number of times the task will be retried if failed. + "retryStrategy": "A String", # Defines what happens to the task upon failure. + }, + "failurePolicies": [ # The list of failure policies that will be applied to the task in order. + { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e, a `retry_strategy` of NONE will be applied). + "condition": "A String", # Optional. The string condition that will be evaluated to determine if the task should be retried with this failure policy. + "intervalTime": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the initial interval in seconds for backoff. + "maxRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the number of times the task will be retried if failed. + "retryStrategy": "A String", # Defines what happens to the task upon failure. + }, + ], + }, "description": "A String", # Optional. User-provided description intended to give additional business context about the task. "displayName": "A String", # Optional. User-provided label that is attached to this TaskConfig in the UI. "errorCatcherId": "A String", # Optional. Optional Error catcher id of the error catch flow which will be executed when execution error happens in the task @@ -2604,6 +2684,22 @@

Method Details

}, }, ], + "conditionalFailurePolicies": { # Optional. Determines the number of times the task will be retried on failure and with what retry strategy. This is applicable for synchronous calls to Eventbus alone (Post). + "defaultFailurePolicy": { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e., a `retry_strategy` of NONE will be applied). # The default failure policy to be applied if no conditional failure policy matches. + "intervalInSeconds": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the initial interval for backoff. + "maxNumRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the number of times the task will be retried if failed. + "retryCondition": "A String", # Optional. The retry condition that will be evaluated for this failure policy with the corresponding retry strategy. + "retryStrategy": "A String", # Defines what happens to the task upon failure. + }, + "failurePolicies": [ # The list of failure policies that will be applied to the task in order. + { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e., a `retry_strategy` of NONE will be applied). + "intervalInSeconds": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the initial interval for backoff. + "maxNumRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the number of times the task will be retried if failed. + "retryCondition": "A String", # Optional. The retry condition that will be evaluated for this failure policy with the corresponding retry strategy. + "retryStrategy": "A String", # Defines what happens to the task upon failure. + }, + ], + }, "createTime": "A String", # Auto-generated. "creatorEmail": "A String", # The creator's email address. Auto-generated from the user's email. "description": "A String", # User-provided description intended to give more business context about the task. @@ -3523,6 +3619,22 @@

Method Details

"status": "A String", # Output only. Generated by eventbus. User should not set it as an input. "taskConfigs": [ # Optional. Task configuration for the integration. It's optional, but the integration doesn't do anything without task_configs. { # The task configuration details. This is not the implementation of Task. There might be multiple TaskConfigs for the same Task. + "conditionalFailurePolicies": { # Conditional task failur retry strategies # Optional. The list of conditional failure policies that will be applied to the task in order. + "defaultFailurePolicy": { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e, a `retry_strategy` of NONE will be applied). # The default failure policy to be applied if no conditional failure policy matches. + "condition": "A String", # Optional. The string condition that will be evaluated to determine if the task should be retried with this failure policy. + "intervalTime": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the initial interval in seconds for backoff. + "maxRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the number of times the task will be retried if failed. + "retryStrategy": "A String", # Defines what happens to the task upon failure. + }, + "failurePolicies": [ # The list of failure policies that will be applied to the task in order. + { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e, a `retry_strategy` of NONE will be applied). + "condition": "A String", # Optional. The string condition that will be evaluated to determine if the task should be retried with this failure policy. + "intervalTime": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the initial interval in seconds for backoff. + "maxRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the number of times the task will be retried if failed. + "retryStrategy": "A String", # Defines what happens to the task upon failure. + }, + ], + }, "description": "A String", # Optional. User-provided description intended to give additional business context about the task. "displayName": "A String", # Optional. User-provided label that is attached to this TaskConfig in the UI. "errorCatcherId": "A String", # Optional. Optional Error catcher id of the error catch flow which will be executed when execution error happens in the task @@ -3628,6 +3740,22 @@

Method Details

}, }, ], + "conditionalFailurePolicies": { # Optional. Determines the number of times the task will be retried on failure and with what retry strategy. This is applicable for synchronous calls to Eventbus alone (Post). + "defaultFailurePolicy": { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e., a `retry_strategy` of NONE will be applied). # The default failure policy to be applied if no conditional failure policy matches. + "intervalInSeconds": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the initial interval for backoff. + "maxNumRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the number of times the task will be retried if failed. + "retryCondition": "A String", # Optional. The retry condition that will be evaluated for this failure policy with the corresponding retry strategy. + "retryStrategy": "A String", # Defines what happens to the task upon failure. + }, + "failurePolicies": [ # The list of failure policies that will be applied to the task in order. + { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e., a `retry_strategy` of NONE will be applied). + "intervalInSeconds": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the initial interval for backoff. + "maxNumRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the number of times the task will be retried if failed. + "retryCondition": "A String", # Optional. The retry condition that will be evaluated for this failure policy with the corresponding retry strategy. + "retryStrategy": "A String", # Defines what happens to the task upon failure. + }, + ], + }, "createTime": "A String", # Auto-generated. "creatorEmail": "A String", # The creator's email address. Auto-generated from the user's email. "description": "A String", # User-provided description intended to give more business context about the task. @@ -4550,6 +4678,22 @@

Method Details

"status": "A String", # Output only. Generated by eventbus. User should not set it as an input. "taskConfigs": [ # Optional. Task configuration for the integration. It's optional, but the integration doesn't do anything without task_configs. { # The task configuration details. This is not the implementation of Task. There might be multiple TaskConfigs for the same Task. + "conditionalFailurePolicies": { # Conditional task failur retry strategies # Optional. The list of conditional failure policies that will be applied to the task in order. + "defaultFailurePolicy": { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e, a `retry_strategy` of NONE will be applied). # The default failure policy to be applied if no conditional failure policy matches. + "condition": "A String", # Optional. The string condition that will be evaluated to determine if the task should be retried with this failure policy. + "intervalTime": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the initial interval in seconds for backoff. + "maxRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the number of times the task will be retried if failed. + "retryStrategy": "A String", # Defines what happens to the task upon failure. + }, + "failurePolicies": [ # The list of failure policies that will be applied to the task in order. + { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e, a `retry_strategy` of NONE will be applied). + "condition": "A String", # Optional. The string condition that will be evaluated to determine if the task should be retried with this failure policy. + "intervalTime": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the initial interval in seconds for backoff. + "maxRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the number of times the task will be retried if failed. + "retryStrategy": "A String", # Defines what happens to the task upon failure. + }, + ], + }, "description": "A String", # Optional. User-provided description intended to give additional business context about the task. "displayName": "A String", # Optional. User-provided label that is attached to this TaskConfig in the UI. "errorCatcherId": "A String", # Optional. Optional Error catcher id of the error catch flow which will be executed when execution error happens in the task @@ -4655,6 +4799,22 @@


}, }, ],
+ "conditionalFailurePolicies": { # Optional. Determines the number of times the task will be retried on failure and with what retry strategy. This is applicable for synchronous calls to Eventbus alone (Post).
+ "defaultFailurePolicy": { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e., a `retry_strategy` of NONE will be applied). # The default failure policy to be applied if no conditional failure policy matches.
+ "intervalInSeconds": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the initial interval for backoff.
+ "maxNumRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the number of times the task will be retried if it fails.
+ "retryCondition": "A String", # Optional. The retry condition that will be evaluated for this failure policy with the corresponding retry strategy.
+ "retryStrategy": "A String", # Defines what happens to the task upon failure.
+ },
+ "failurePolicies": [ # The list of failure policies that will be applied to the task in order.
+ { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e., a `retry_strategy` of NONE will be applied).
+ "intervalInSeconds": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the initial interval for backoff.
+ "maxNumRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the number of times the task will be retried if it fails.
+ "retryCondition": "A String", # Optional. The retry condition that will be evaluated for this failure policy with the corresponding retry strategy.
+ "retryStrategy": "A String", # Defines what happens to the task upon failure.
+ },
+ ],
+ },
"createTime": "A String", # Auto-generated.
"creatorEmail": "A String", # The creator's email address. Auto-generated from the user's email.
"description": "A String", # User-provided description intended to give more business context about the task.
@@ -5583,6 +5743,22 @@


"status": "A String", # Output only. Generated by eventbus. User should not set it as an input. "taskConfigs": [ # Optional. Task configuration for the integration. It's optional, but the integration doesn't do anything without task_configs. { # The task configuration details. This is not the implementation of Task. There might be multiple TaskConfigs for the same Task. + "conditionalFailurePolicies": { # Conditional task failur retry strategies # Optional. The list of conditional failure policies that will be applied to the task in order. + "defaultFailurePolicy": { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e, a `retry_strategy` of NONE will be applied). # The default failure policy to be applied if no conditional failure policy matches. + "condition": "A String", # Optional. The string condition that will be evaluated to determine if the task should be retried with this failure policy. + "intervalTime": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the initial interval in seconds for backoff. + "maxRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the number of times the task will be retried if failed. + "retryStrategy": "A String", # Defines what happens to the task upon failure. + }, + "failurePolicies": [ # The list of failure policies that will be applied to the task in order. + { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e, a `retry_strategy` of NONE will be applied). + "condition": "A String", # Optional. The string condition that will be evaluated to determine if the task should be retried with this failure policy. + "intervalTime": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the initial interval in seconds for backoff. + "maxRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the number of times the task will be retried if failed. + "retryStrategy": "A String", # Defines what happens to the task upon failure. + }, + ], + }, "description": "A String", # Optional. User-provided description intended to give additional business context about the task. "displayName": "A String", # Optional. User-provided label that is attached to this TaskConfig in the UI. "errorCatcherId": "A String", # Optional. Optional Error catcher id of the error catch flow which will be executed when execution error happens in the task @@ -5688,6 +5864,22 @@


}, }, ],
+ "conditionalFailurePolicies": { # Optional. Determines the number of times the task will be retried on failure and with what retry strategy. This is applicable for synchronous calls to Eventbus alone (Post).
+ "defaultFailurePolicy": { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e., a `retry_strategy` of NONE will be applied). # The default failure policy to be applied if no conditional failure policy matches.
+ "intervalInSeconds": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the initial interval for backoff.
+ "maxNumRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the number of times the task will be retried if it fails.
+ "retryCondition": "A String", # Optional. The retry condition that will be evaluated for this failure policy with the corresponding retry strategy.
+ "retryStrategy": "A String", # Defines what happens to the task upon failure.
+ },
+ "failurePolicies": [ # The list of failure policies that will be applied to the task in order.
+ { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e., a `retry_strategy` of NONE will be applied).
+ "intervalInSeconds": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the initial interval for backoff.
+ "maxNumRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the number of times the task will be retried if it fails.
+ "retryCondition": "A String", # Optional. The retry condition that will be evaluated for this failure policy with the corresponding retry strategy.
+ "retryStrategy": "A String", # Defines what happens to the task upon failure.
+ },
+ ],
+ },
"createTime": "A String", # Auto-generated.
"creatorEmail": "A String", # The creator's email address. Auto-generated from the user's email.
"description": "A String", # User-provided description intended to give more business context about the task.
@@ -6597,6 +6789,22 @@


"status": "A String", # Output only. Generated by eventbus. User should not set it as an input. "taskConfigs": [ # Optional. Task configuration for the integration. It's optional, but the integration doesn't do anything without task_configs. { # The task configuration details. This is not the implementation of Task. There might be multiple TaskConfigs for the same Task. + "conditionalFailurePolicies": { # Conditional task failur retry strategies # Optional. The list of conditional failure policies that will be applied to the task in order. + "defaultFailurePolicy": { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e, a `retry_strategy` of NONE will be applied). # The default failure policy to be applied if no conditional failure policy matches. + "condition": "A String", # Optional. The string condition that will be evaluated to determine if the task should be retried with this failure policy. + "intervalTime": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the initial interval in seconds for backoff. + "maxRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the number of times the task will be retried if failed. + "retryStrategy": "A String", # Defines what happens to the task upon failure. + }, + "failurePolicies": [ # The list of failure policies that will be applied to the task in order. + { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e, a `retry_strategy` of NONE will be applied). + "condition": "A String", # Optional. The string condition that will be evaluated to determine if the task should be retried with this failure policy. + "intervalTime": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the initial interval in seconds for backoff. + "maxRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the number of times the task will be retried if failed. + "retryStrategy": "A String", # Defines what happens to the task upon failure. + }, + ], + }, "description": "A String", # Optional. User-provided description intended to give additional business context about the task. "displayName": "A String", # Optional. User-provided label that is attached to this TaskConfig in the UI. "errorCatcherId": "A String", # Optional. Optional Error catcher id of the error catch flow which will be executed when execution error happens in the task @@ -6702,6 +6910,22 @@


}, }, ],
+ "conditionalFailurePolicies": { # Optional. Determines the number of times the task will be retried on failure and with what retry strategy. This is applicable for synchronous calls to Eventbus alone (Post).
+ "defaultFailurePolicy": { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e., a `retry_strategy` of NONE will be applied). # The default failure policy to be applied if no conditional failure policy matches.
+ "intervalInSeconds": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the initial interval for backoff.
+ "maxNumRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the number of times the task will be retried if it fails.
+ "retryCondition": "A String", # Optional. The retry condition that will be evaluated for this failure policy with the corresponding retry strategy.
+ "retryStrategy": "A String", # Defines what happens to the task upon failure.
+ },
+ "failurePolicies": [ # The list of failure policies that will be applied to the task in order.
+ { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e., a `retry_strategy` of NONE will be applied).
+ "intervalInSeconds": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the initial interval for backoff.
+ "maxNumRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the number of times the task will be retried if it fails.
+ "retryCondition": "A String", # Optional. The retry condition that will be evaluated for this failure policy with the corresponding retry strategy.
+ "retryStrategy": "A String", # Defines what happens to the task upon failure.
+ },
+ ],
+ },
"createTime": "A String", # Auto-generated.
"creatorEmail": "A String", # The creator's email address. Auto-generated from the user's email.
"description": "A String", # User-provided description intended to give more business context about the task.
@@ -7677,6 +7901,22 @@


"status": "A String", # Output only. Generated by eventbus. User should not set it as an input. "taskConfigs": [ # Optional. Task configuration for the integration. It's optional, but the integration doesn't do anything without task_configs. { # The task configuration details. This is not the implementation of Task. There might be multiple TaskConfigs for the same Task. + "conditionalFailurePolicies": { # Conditional task failur retry strategies # Optional. The list of conditional failure policies that will be applied to the task in order. + "defaultFailurePolicy": { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e, a `retry_strategy` of NONE will be applied). # The default failure policy to be applied if no conditional failure policy matches. + "condition": "A String", # Optional. The string condition that will be evaluated to determine if the task should be retried with this failure policy. + "intervalTime": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the initial interval in seconds for backoff. + "maxRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the number of times the task will be retried if failed. + "retryStrategy": "A String", # Defines what happens to the task upon failure. + }, + "failurePolicies": [ # The list of failure policies that will be applied to the task in order. + { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e, a `retry_strategy` of NONE will be applied). + "condition": "A String", # Optional. The string condition that will be evaluated to determine if the task should be retried with this failure policy. + "intervalTime": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the initial interval in seconds for backoff. + "maxRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the number of times the task will be retried if failed. + "retryStrategy": "A String", # Defines what happens to the task upon failure. + }, + ], + }, "description": "A String", # Optional. User-provided description intended to give additional business context about the task. "displayName": "A String", # Optional. User-provided label that is attached to this TaskConfig in the UI. "errorCatcherId": "A String", # Optional. Optional Error catcher id of the error catch flow which will be executed when execution error happens in the task @@ -7782,6 +8022,22 @@


}, }, ],
+ "conditionalFailurePolicies": { # Optional. Determines the number of times the task will be retried on failure and with what retry strategy. This is applicable for synchronous calls to Eventbus alone (Post).
+ "defaultFailurePolicy": { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e., a `retry_strategy` of NONE will be applied). # The default failure policy to be applied if no conditional failure policy matches.
+ "intervalInSeconds": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the initial interval for backoff.
+ "maxNumRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the number of times the task will be retried if it fails.
+ "retryCondition": "A String", # Optional. The retry condition that will be evaluated for this failure policy with the corresponding retry strategy.
+ "retryStrategy": "A String", # Defines what happens to the task upon failure.
+ },
+ "failurePolicies": [ # The list of failure policies that will be applied to the task in order.
+ { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e., a `retry_strategy` of NONE will be applied).
+ "intervalInSeconds": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the initial interval for backoff.
+ "maxNumRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the number of times the task will be retried if it fails.
+ "retryCondition": "A String", # Optional. The retry condition that will be evaluated for this failure policy with the corresponding retry strategy.
+ "retryStrategy": "A String", # Defines what happens to the task upon failure.
+ },
+ ],
+ },
"createTime": "A String", # Auto-generated.
"creatorEmail": "A String", # The creator's email address. Auto-generated from the user's email.
"description": "A String", # User-provided description intended to give more business context about the task.
diff --git a/docs/dyn/integrations_v1.projects.locations.products.integrations.executions.html b/docs/dyn/integrations_v1.projects.locations.products.integrations.executions.html
index d74f31c4a87..fe2574043fb 100644
--- a/docs/dyn/integrations_v1.projects.locations.products.integrations.executions.html
+++ b/docs/dyn/integrations_v1.projects.locations.products.integrations.executions.html
@@ -178,8 +178,9 @@


}, ], "eventExecutionSnapshot": [ - { # Contains the snapshot of the event execution for a given checkpoint. Next available id: 13 + { # Contains the snapshot of the event execution for a given checkpoint. Next available id: 15 "checkpointTaskNumber": "A String", # Indicates "right after which checkpoint task's execution" this snapshot is taken. + "clientId": "A String", # Client that the execution snapshot is associated to. "conditionResults": [ # All of the computed conditions that been calculated. { # Contains the combined condition calculation results. "currentTaskNumber": "A String", # the current task number. @@ -312,6 +313,7 @@


}, ], "taskName": "A String", # The task name associated with this snapshot. Could be empty. + "workflowName": "A String", # Name of the workflow this event execution snapshot belongs to. }, ], "eventExecutionSnapshotsSize": "A String", # Total size of all event_execution_snapshots for an execution @@ -911,8 +913,9 @@


}, ], "eventExecutionSnapshot": [ - { # Contains the snapshot of the event execution for a given checkpoint. Next available id: 13 + { # Contains the snapshot of the event execution for a given checkpoint. Next available id: 15 "checkpointTaskNumber": "A String", # Indicates "right after which checkpoint task's execution" this snapshot is taken. + "clientId": "A String", # Client that the execution snapshot is associated to. "conditionResults": [ # All of the computed conditions that been calculated. { # Contains the combined condition calculation results. "currentTaskNumber": "A String", # the current task number. @@ -1045,6 +1048,7 @@


}, ], "taskName": "A String", # The task name associated with this snapshot. Could be empty. + "workflowName": "A String", # Name of the workflow this event execution snapshot belongs to. }, ], "eventExecutionSnapshotsSize": "A String", # Total size of all event_execution_snapshots for an execution diff --git a/docs/dyn/integrations_v1.projects.locations.products.integrations.html b/docs/dyn/integrations_v1.projects.locations.products.integrations.html index f2561f2ba68..6829b93b917 100644 --- a/docs/dyn/integrations_v1.projects.locations.products.integrations.html +++ b/docs/dyn/integrations_v1.projects.locations.products.integrations.html @@ -883,6 +883,22 @@


"status": "A String", # Output only. Generated by eventbus. User should not set it as an input. "taskConfigs": [ # Optional. Task configuration for the integration. It's optional, but the integration doesn't do anything without task_configs. { # The task configuration details. This is not the implementation of Task. There might be multiple TaskConfigs for the same Task. + "conditionalFailurePolicies": { # Conditional task failur retry strategies # Optional. The list of conditional failure policies that will be applied to the task in order. + "defaultFailurePolicy": { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e, a `retry_strategy` of NONE will be applied). # The default failure policy to be applied if no conditional failure policy matches. + "condition": "A String", # Optional. The string condition that will be evaluated to determine if the task should be retried with this failure policy. + "intervalTime": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the initial interval in seconds for backoff. + "maxRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the number of times the task will be retried if failed. + "retryStrategy": "A String", # Defines what happens to the task upon failure. + }, + "failurePolicies": [ # The list of failure policies that will be applied to the task in order. + { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e, a `retry_strategy` of NONE will be applied). + "condition": "A String", # Optional. The string condition that will be evaluated to determine if the task should be retried with this failure policy. + "intervalTime": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the initial interval in seconds for backoff. + "maxRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the number of times the task will be retried if failed. + "retryStrategy": "A String", # Defines what happens to the task upon failure. + }, + ], + }, "description": "A String", # Optional. User-provided description intended to give additional business context about the task. "displayName": "A String", # Optional. User-provided label that is attached to this TaskConfig in the UI. "errorCatcherId": "A String", # Optional. Optional Error catcher id of the error catch flow which will be executed when execution error happens in the task @@ -988,6 +1004,22 @@


}, }, ],
+ "conditionalFailurePolicies": { # Optional. Determines the number of times the task will be retried on failure and with what retry strategy. This is applicable for synchronous calls to Eventbus alone (Post).
+ "defaultFailurePolicy": { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e., a `retry_strategy` of NONE will be applied). # The default failure policy to be applied if no conditional failure policy matches.
+ "intervalInSeconds": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the initial interval for backoff.
+ "maxNumRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the number of times the task will be retried if it fails.
+ "retryCondition": "A String", # Optional. The retry condition that will be evaluated for this failure policy with the corresponding retry strategy.
+ "retryStrategy": "A String", # Defines what happens to the task upon failure.
+ },
+ "failurePolicies": [ # The list of failure policies that will be applied to the task in order.
+ { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e., a `retry_strategy` of NONE will be applied).
+ "intervalInSeconds": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the initial interval for backoff.
+ "maxNumRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the number of times the task will be retried if it fails.
+ "retryCondition": "A String", # Optional. The retry condition that will be evaluated for this failure policy with the corresponding retry strategy.
+ "retryStrategy": "A String", # Defines what happens to the task upon failure.
+ },
+ ],
+ },
"createTime": "A String", # Auto-generated.
"creatorEmail": "A String", # The creator's email address. Auto-generated from the user's email.
"description": "A String", # User-provided description intended to give more business context about the task.
diff --git a/docs/dyn/integrations_v1.projects.locations.products.integrations.versions.html b/docs/dyn/integrations_v1.projects.locations.products.integrations.versions.html
index 28c7fc206ff..ff348eb9286 100644
--- a/docs/dyn/integrations_v1.projects.locations.products.integrations.versions.html
+++ b/docs/dyn/integrations_v1.projects.locations.products.integrations.versions.html
@@ -400,6 +400,22 @@


"status": "A String", # Output only. Generated by eventbus. User should not set it as an input. "taskConfigs": [ # Optional. Task configuration for the integration. It's optional, but the integration doesn't do anything without task_configs. { # The task configuration details. This is not the implementation of Task. There might be multiple TaskConfigs for the same Task. + "conditionalFailurePolicies": { # Conditional task failur retry strategies # Optional. The list of conditional failure policies that will be applied to the task in order. + "defaultFailurePolicy": { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e, a `retry_strategy` of NONE will be applied). # The default failure policy to be applied if no conditional failure policy matches. + "condition": "A String", # Optional. The string condition that will be evaluated to determine if the task should be retried with this failure policy. + "intervalTime": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the initial interval in seconds for backoff. + "maxRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the number of times the task will be retried if failed. + "retryStrategy": "A String", # Defines what happens to the task upon failure. + }, + "failurePolicies": [ # The list of failure policies that will be applied to the task in order. + { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e, a `retry_strategy` of NONE will be applied). + "condition": "A String", # Optional. The string condition that will be evaluated to determine if the task should be retried with this failure policy. + "intervalTime": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the initial interval in seconds for backoff. + "maxRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the number of times the task will be retried if failed. + "retryStrategy": "A String", # Defines what happens to the task upon failure. + }, + ], + }, "description": "A String", # Optional. User-provided description intended to give additional business context about the task. "displayName": "A String", # Optional. User-provided label that is attached to this TaskConfig in the UI. "errorCatcherId": "A String", # Optional. Optional Error catcher id of the error catch flow which will be executed when execution error happens in the task @@ -505,6 +521,22 @@


}, }, ],
+ "conditionalFailurePolicies": { # Optional. Determines the number of times the task will be retried on failure and with what retry strategy. This is applicable for synchronous calls to Eventbus alone (Post).
+ "defaultFailurePolicy": { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e., a `retry_strategy` of NONE will be applied). # The default failure policy to be applied if no conditional failure policy matches.
+ "intervalInSeconds": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the initial interval for backoff.
+ "maxNumRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the number of times the task will be retried if it fails.
+ "retryCondition": "A String", # Optional. The retry condition that will be evaluated for this failure policy with the corresponding retry strategy.
+ "retryStrategy": "A String", # Defines what happens to the task upon failure.
+ },
+ "failurePolicies": [ # The list of failure policies that will be applied to the task in order.
+ { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e., a `retry_strategy` of NONE will be applied).
+ "intervalInSeconds": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the initial interval for backoff.
+ "maxNumRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the number of times the task will be retried if it fails.
+ "retryCondition": "A String", # Optional. The retry condition that will be evaluated for this failure policy with the corresponding retry strategy.
+ "retryStrategy": "A String", # Defines what happens to the task upon failure.
+ },
+ ],
+ },
"createTime": "A String", # Auto-generated.
"creatorEmail": "A String", # The creator's email address. Auto-generated from the user's email.
"description": "A String", # User-provided description intended to give more business context about the task.
@@ -1415,6 +1447,22 @@


"status": "A String", # Output only. Generated by eventbus. User should not set it as an input. "taskConfigs": [ # Optional. Task configuration for the integration. It's optional, but the integration doesn't do anything without task_configs. { # The task configuration details. This is not the implementation of Task. There might be multiple TaskConfigs for the same Task. + "conditionalFailurePolicies": { # Conditional task failur retry strategies # Optional. The list of conditional failure policies that will be applied to the task in order. + "defaultFailurePolicy": { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e, a `retry_strategy` of NONE will be applied). # The default failure policy to be applied if no conditional failure policy matches. + "condition": "A String", # Optional. The string condition that will be evaluated to determine if the task should be retried with this failure policy. + "intervalTime": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the initial interval in seconds for backoff. + "maxRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the number of times the task will be retried if failed. + "retryStrategy": "A String", # Defines what happens to the task upon failure. + }, + "failurePolicies": [ # The list of failure policies that will be applied to the task in order. + { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e, a `retry_strategy` of NONE will be applied). + "condition": "A String", # Optional. The string condition that will be evaluated to determine if the task should be retried with this failure policy. + "intervalTime": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the initial interval in seconds for backoff. + "maxRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the number of times the task will be retried if failed. + "retryStrategy": "A String", # Defines what happens to the task upon failure. + }, + ], + }, "description": "A String", # Optional. User-provided description intended to give additional business context about the task. "displayName": "A String", # Optional. User-provided label that is attached to this TaskConfig in the UI. "errorCatcherId": "A String", # Optional. Optional Error catcher id of the error catch flow which will be executed when execution error happens in the task @@ -1520,6 +1568,22 @@


}, }, ],
+ "conditionalFailurePolicies": { # Optional. Determines the number of times the task will be retried on failure and with what retry strategy. This is applicable for synchronous calls to Eventbus alone (Post).
+ "defaultFailurePolicy": { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e., a `retry_strategy` of NONE will be applied). # The default failure policy to be applied if no conditional failure policy matches.
+ "intervalInSeconds": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the initial interval for backoff.
+ "maxNumRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the number of times the task will be retried if it fails.
+ "retryCondition": "A String", # Optional. The retry condition that will be evaluated for this failure policy with the corresponding retry strategy.
+ "retryStrategy": "A String", # Defines what happens to the task upon failure.
+ },
+ "failurePolicies": [ # The list of failure policies that will be applied to the task in order.
+ { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e., a `retry_strategy` of NONE will be applied).
+ "intervalInSeconds": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the initial interval for backoff.
+ "maxNumRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the number of times the task will be retried if it fails.
+ "retryCondition": "A String", # Optional. The retry condition that will be evaluated for this failure policy with the corresponding retry strategy.
+ "retryStrategy": "A String", # Defines what happens to the task upon failure.
+ },
+ ],
+ },
"createTime": "A String", # Auto-generated.
"creatorEmail": "A String", # The creator's email address. Auto-generated from the user's email.
"description": "A String", # User-provided description intended to give more business context about the task.
@@ -2488,6 +2552,22 @@


"status": "A String", # Output only. Generated by eventbus. User should not set it as an input. "taskConfigs": [ # Optional. Task configuration for the integration. It's optional, but the integration doesn't do anything without task_configs. { # The task configuration details. This is not the implementation of Task. There might be multiple TaskConfigs for the same Task. + "conditionalFailurePolicies": { # Conditional task failur retry strategies # Optional. The list of conditional failure policies that will be applied to the task in order. + "defaultFailurePolicy": { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e, a `retry_strategy` of NONE will be applied). # The default failure policy to be applied if no conditional failure policy matches. + "condition": "A String", # Optional. The string condition that will be evaluated to determine if the task should be retried with this failure policy. + "intervalTime": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the initial interval in seconds for backoff. + "maxRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the number of times the task will be retried if failed. + "retryStrategy": "A String", # Defines what happens to the task upon failure. + }, + "failurePolicies": [ # The list of failure policies that will be applied to the task in order. + { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e, a `retry_strategy` of NONE will be applied). + "condition": "A String", # Optional. The string condition that will be evaluated to determine if the task should be retried with this failure policy. + "intervalTime": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the initial interval in seconds for backoff. + "maxRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the number of times the task will be retried if failed. + "retryStrategy": "A String", # Defines what happens to the task upon failure. + }, + ], + }, "description": "A String", # Optional. User-provided description intended to give additional business context about the task. "displayName": "A String", # Optional. User-provided label that is attached to this TaskConfig in the UI. "errorCatcherId": "A String", # Optional. Optional Error catcher id of the error catch flow which will be executed when execution error happens in the task @@ -2593,6 +2673,22 @@


}, }, ],
+ "conditionalFailurePolicies": { # Optional. Determines the number of times the task will be retried on failure and with what retry strategy. This is applicable for synchronous calls to Eventbus alone (Post).
+ "defaultFailurePolicy": { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e., a `retry_strategy` of NONE will be applied). # The default failure policy to be applied if no conditional failure policy matches.
+ "intervalInSeconds": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the initial interval for backoff.
+ "maxNumRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the number of times the task will be retried if it fails.
+ "retryCondition": "A String", # Optional. The retry condition that will be evaluated for this failure policy with the corresponding retry strategy.
+ "retryStrategy": "A String", # Defines what happens to the task upon failure.
+ },
+ "failurePolicies": [ # The list of failure policies that will be applied to the task in order.
+ { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e., a `retry_strategy` of NONE will be applied).
+ "intervalInSeconds": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the initial interval for backoff.
+ "maxNumRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the number of times the task will be retried if it fails.
+ "retryCondition": "A String", # Optional. The retry condition that will be evaluated for this failure policy with the corresponding retry strategy.
+ "retryStrategy": "A String", # Defines what happens to the task upon failure.
+ },
+ ],
+ },
"createTime": "A String", # Auto-generated.
"creatorEmail": "A String", # The creator's email address. Auto-generated from the user's email.
"description": "A String", # User-provided description intended to give more business context about the task.
@@ -3515,6 +3611,22 @@


"status": "A String", # Output only. Generated by eventbus. User should not set it as an input. "taskConfigs": [ # Optional. Task configuration for the integration. It's optional, but the integration doesn't do anything without task_configs. { # The task configuration details. This is not the implementation of Task. There might be multiple TaskConfigs for the same Task. + "conditionalFailurePolicies": { # Conditional task failur retry strategies # Optional. The list of conditional failure policies that will be applied to the task in order. + "defaultFailurePolicy": { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e, a `retry_strategy` of NONE will be applied). # The default failure policy to be applied if no conditional failure policy matches. + "condition": "A String", # Optional. The string condition that will be evaluated to determine if the task should be retried with this failure policy. + "intervalTime": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the initial interval in seconds for backoff. + "maxRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the number of times the task will be retried if failed. + "retryStrategy": "A String", # Defines what happens to the task upon failure. + }, + "failurePolicies": [ # The list of failure policies that will be applied to the task in order. + { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e, a `retry_strategy` of NONE will be applied). + "condition": "A String", # Optional. The string condition that will be evaluated to determine if the task should be retried with this failure policy. + "intervalTime": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the initial interval in seconds for backoff. + "maxRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the number of times the task will be retried if failed. + "retryStrategy": "A String", # Defines what happens to the task upon failure. + }, + ], + }, "description": "A String", # Optional. User-provided description intended to give additional business context about the task. "displayName": "A String", # Optional. User-provided label that is attached to this TaskConfig in the UI. "errorCatcherId": "A String", # Optional. Optional Error catcher id of the error catch flow which will be executed when execution error happens in the task @@ -3620,6 +3732,22 @@


}, }, ],
+ "conditionalFailurePolicies": { # Optional. Determines the number of times the task will be retried on failure and with what retry strategy. This is applicable for synchronous calls to Eventbus alone (Post).
+ "defaultFailurePolicy": { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e., a `retry_strategy` of NONE will be applied). # The default failure policy to be applied if no conditional failure policy matches.
+ "intervalInSeconds": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the initial interval for backoff.
+ "maxNumRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the number of times the task will be retried if it fails.
+ "retryCondition": "A String", # Optional. The retry condition that will be evaluated for this failure policy with the corresponding retry strategy.
+ "retryStrategy": "A String", # Defines what happens to the task upon failure.
+ },
+ "failurePolicies": [ # The list of failure policies that will be applied to the task in order.
+ { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e., a `retry_strategy` of NONE will be applied).
+ "intervalInSeconds": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the initial interval for backoff.
+ "maxNumRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the number of times the task will be retried if it fails.
+ "retryCondition": "A String", # Optional. The retry condition that will be evaluated for this failure policy with the corresponding retry strategy.
+ "retryStrategy": "A String", # Defines what happens to the task upon failure.
+ },
+ ],
+ },
"createTime": "A String", # Auto-generated.
"creatorEmail": "A String", # The creator's email address. Auto-generated from the user's email.
"description": "A String", # User-provided description intended to give more business context about the task.
@@ -4548,6 +4676,22 @@


"status": "A String", # Output only. Generated by eventbus. User should not set it as an input. "taskConfigs": [ # Optional. Task configuration for the integration. It's optional, but the integration doesn't do anything without task_configs. { # The task configuration details. This is not the implementation of Task. There might be multiple TaskConfigs for the same Task. + "conditionalFailurePolicies": { # Conditional task failur retry strategies # Optional. The list of conditional failure policies that will be applied to the task in order. + "defaultFailurePolicy": { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e, a `retry_strategy` of NONE will be applied). # The default failure policy to be applied if no conditional failure policy matches. + "condition": "A String", # Optional. The string condition that will be evaluated to determine if the task should be retried with this failure policy. + "intervalTime": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the initial interval in seconds for backoff. + "maxRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the number of times the task will be retried if failed. + "retryStrategy": "A String", # Defines what happens to the task upon failure. + }, + "failurePolicies": [ # The list of failure policies that will be applied to the task in order. + { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e, a `retry_strategy` of NONE will be applied). + "condition": "A String", # Optional. The string condition that will be evaluated to determine if the task should be retried with this failure policy. + "intervalTime": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the initial interval in seconds for backoff. + "maxRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the number of times the task will be retried if failed. + "retryStrategy": "A String", # Defines what happens to the task upon failure. + }, + ], + }, "description": "A String", # Optional. User-provided description intended to give additional business context about the task. "displayName": "A String", # Optional. User-provided label that is attached to this TaskConfig in the UI. "errorCatcherId": "A String", # Optional. Optional Error catcher id of the error catch flow which will be executed when execution error happens in the task @@ -4653,6 +4797,22 @@


}, }, ],
+ "conditionalFailurePolicies": { # Optional. Determines the number of times the task will be retried on failure and with what retry strategy. This is applicable for synchronous calls to Eventbus alone (Post).
+ "defaultFailurePolicy": { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e., a `retry_strategy` of NONE will be applied). # The default failure policy to be applied if no conditional failure policy matches.
+ "intervalInSeconds": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the initial interval for backoff.
+ "maxNumRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the number of times the task will be retried if it fails.
+ "retryCondition": "A String", # Optional. The retry condition that will be evaluated for this failure policy with the corresponding retry strategy.
+ "retryStrategy": "A String", # Defines what happens to the task upon failure.
+ },
+ "failurePolicies": [ # The list of failure policies that will be applied to the task in order.
+ { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e., a `retry_strategy` of NONE will be applied).
+ "intervalInSeconds": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the initial interval for backoff.
+ "maxNumRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the number of times the task will be retried if it fails.
+ "retryCondition": "A String", # Optional. The retry condition that will be evaluated for this failure policy with the corresponding retry strategy.
+ "retryStrategy": "A String", # Defines what happens to the task upon failure.
+ },
+ ],
+ },
"createTime": "A String", # Auto-generated.
"creatorEmail": "A String", # The creator's email address. Auto-generated from the user's email.
"description": "A String", # User-provided description intended to give more business context about the task.
@@ -5562,6 +5722,22 @@


"status": "A String", # Output only. Generated by eventbus. User should not set it as an input. "taskConfigs": [ # Optional. Task configuration for the integration. It's optional, but the integration doesn't do anything without task_configs. { # The task configuration details. This is not the implementation of Task. There might be multiple TaskConfigs for the same Task. + "conditionalFailurePolicies": { # Conditional task failur retry strategies # Optional. The list of conditional failure policies that will be applied to the task in order. + "defaultFailurePolicy": { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e, a `retry_strategy` of NONE will be applied). # The default failure policy to be applied if no conditional failure policy matches. + "condition": "A String", # Optional. The string condition that will be evaluated to determine if the task should be retried with this failure policy. + "intervalTime": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the initial interval in seconds for backoff. + "maxRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the number of times the task will be retried if failed. + "retryStrategy": "A String", # Defines what happens to the task upon failure. + }, + "failurePolicies": [ # The list of failure policies that will be applied to the task in order. + { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e, a `retry_strategy` of NONE will be applied). + "condition": "A String", # Optional. The string condition that will be evaluated to determine if the task should be retried with this failure policy. + "intervalTime": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the initial interval in seconds for backoff. + "maxRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the number of times the task will be retried if failed. + "retryStrategy": "A String", # Defines what happens to the task upon failure. + }, + ], + }, "description": "A String", # Optional. User-provided description intended to give additional business context about the task. "displayName": "A String", # Optional. User-provided label that is attached to this TaskConfig in the UI. "errorCatcherId": "A String", # Optional. Optional Error catcher id of the error catch flow which will be executed when execution error happens in the task @@ -5667,6 +5843,22 @@


}, }, ],
+ "conditionalFailurePolicies": { # Optional. Determines the number of times the task will be retried on failure and with what retry strategy. This is applicable for synchronous calls to Eventbus alone (Post).
+ "defaultFailurePolicy": { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e., a `retry_strategy` of NONE will be applied). # The default failure policy to be applied if no conditional failure policy matches.
+ "intervalInSeconds": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the initial interval for backoff.
+ "maxNumRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the number of times the task will be retried if it fails.
+ "retryCondition": "A String", # Optional. The retry condition that will be evaluated for this failure policy with the corresponding retry strategy.
+ "retryStrategy": "A String", # Defines what happens to the task upon failure.
+ },
+ "failurePolicies": [ # The list of failure policies that will be applied to the task in order.
+ { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e., a `retry_strategy` of NONE will be applied).
+ "intervalInSeconds": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the initial interval for backoff.
+ "maxNumRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the number of times the task will be retried if it fails.
+ "retryCondition": "A String", # Optional. The retry condition that will be evaluated for this failure policy with the corresponding retry strategy.
+ "retryStrategy": "A String", # Defines what happens to the task upon failure.
+ },
+ ],
+ },
"createTime": "A String", # Auto-generated.
"creatorEmail": "A String", # The creator's email address. Auto-generated from the user's email.
"description": "A String", # User-provided description intended to give more business context about the task.
@@ -6616,6 +6808,22 @@

Method Details

"status": "A String", # Output only. Generated by eventbus. User should not set it as an input. "taskConfigs": [ # Optional. Task configuration for the integration. It's optional, but the integration doesn't do anything without task_configs. { # The task configuration details. This is not the implementation of Task. There might be multiple TaskConfigs for the same Task. + "conditionalFailurePolicies": { # Conditional task failur retry strategies # Optional. The list of conditional failure policies that will be applied to the task in order. + "defaultFailurePolicy": { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e, a `retry_strategy` of NONE will be applied). # The default failure policy to be applied if no conditional failure policy matches. + "condition": "A String", # Optional. The string condition that will be evaluated to determine if the task should be retried with this failure policy. + "intervalTime": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the initial interval in seconds for backoff. + "maxRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the number of times the task will be retried if failed. + "retryStrategy": "A String", # Defines what happens to the task upon failure. + }, + "failurePolicies": [ # The list of failure policies that will be applied to the task in order. + { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e, a `retry_strategy` of NONE will be applied). + "condition": "A String", # Optional. The string condition that will be evaluated to determine if the task should be retried with this failure policy. + "intervalTime": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the initial interval in seconds for backoff. + "maxRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the number of times the task will be retried if failed. + "retryStrategy": "A String", # Defines what happens to the task upon failure. + }, + ], + }, "description": "A String", # Optional. User-provided description intended to give additional business context about the task. "displayName": "A String", # Optional. User-provided label that is attached to this TaskConfig in the UI. "errorCatcherId": "A String", # Optional. Optional Error catcher id of the error catch flow which will be executed when execution error happens in the task @@ -6721,6 +6929,22 @@

Method Details

}, }, ], + "conditionalFailurePolicies": { # Optional. Determines the number of times the task will be retried on failure and with what retry strategy. This is applicable for synchronous calls to Eventbus alone (Post). + "defaultFailurePolicy": { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e., a `retry_strategy` of NONE will be applied). # The default failure policy to be applied if no conditional failure policy matches. + "intervalInSeconds": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the initial interval for backoff. + "maxNumRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the number of times the task will be retried if failed. + "retryCondition": "A String", # Optional. The retry condition that will be evaluated for this failure policy with the corresponding retry strategy. + "retryStrategy": "A String", # Defines what happens to the task upon failure. + }, + "failurePolicies": [ # The list of failure policies that will be applied to the task in order. + { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e., a `retry_strategy` of NONE will be applied). + "intervalInSeconds": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the initial interval for backoff. + "maxNumRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the number of times the task will be retried if failed. + "retryCondition": "A String", # Optional. The retry condition that will be evaluated for this failure policy with the corresponding retry strategy. + "retryStrategy": "A String", # Defines what happens to the task upon failure. + }, + ], + }, "createTime": "A String", # Auto-generated. "creatorEmail": "A String", # The creator's email address. Auto-generated from the user's email. "description": "A String", # User-provided description intended to give more business context about the task. @@ -7670,6 +7894,22 @@

Method Details

"status": "A String", # Output only. Generated by eventbus. User should not set it as an input. "taskConfigs": [ # Optional. Task configuration for the integration. It's optional, but the integration doesn't do anything without task_configs. { # The task configuration details. This is not the implementation of Task. There might be multiple TaskConfigs for the same Task. + "conditionalFailurePolicies": { # Conditional task failur retry strategies # Optional. The list of conditional failure policies that will be applied to the task in order. + "defaultFailurePolicy": { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e, a `retry_strategy` of NONE will be applied). # The default failure policy to be applied if no conditional failure policy matches. + "condition": "A String", # Optional. The string condition that will be evaluated to determine if the task should be retried with this failure policy. + "intervalTime": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the initial interval in seconds for backoff. + "maxRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the number of times the task will be retried if failed. + "retryStrategy": "A String", # Defines what happens to the task upon failure. + }, + "failurePolicies": [ # The list of failure policies that will be applied to the task in order. + { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e, a `retry_strategy` of NONE will be applied). + "condition": "A String", # Optional. The string condition that will be evaluated to determine if the task should be retried with this failure policy. + "intervalTime": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the initial interval in seconds for backoff. + "maxRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_INTEGRATION_WITH_BACKOFF. Defines the number of times the task will be retried if failed. + "retryStrategy": "A String", # Defines what happens to the task upon failure. + }, + ], + }, "description": "A String", # Optional. User-provided description intended to give additional business context about the task. "displayName": "A String", # Optional. User-provided label that is attached to this TaskConfig in the UI. "errorCatcherId": "A String", # Optional. Optional Error catcher id of the error catch flow which will be executed when execution error happens in the task @@ -7775,6 +8015,22 @@

Method Details

}, }, ], + "conditionalFailurePolicies": { # Optional. Determines the number of times the task will be retried on failure and with what retry strategy. This is applicable for synchronous calls to Eventbus alone (Post). + "defaultFailurePolicy": { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e., a `retry_strategy` of NONE will be applied). # The default failure policy to be applied if no conditional failure policy matches. + "intervalInSeconds": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the initial interval for backoff. + "maxNumRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the number of times the task will be retried if failed. + "retryCondition": "A String", # Optional. The retry condition that will be evaluated for this failure policy with the corresponding retry strategy. + "retryStrategy": "A String", # Defines what happens to the task upon failure. + }, + "failurePolicies": [ # The list of failure policies that will be applied to the task in order. + { # Policy that defines the task retry logic and failure type. If no FailurePolicy is defined for a task, all its dependent tasks will not be executed (i.e., a `retry_strategy` of NONE will be applied). + "intervalInSeconds": "A String", # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the initial interval for backoff. + "maxNumRetries": 42, # Required if retry_strategy is FIXED_INTERVAL or LINEAR/EXPONENTIAL_BACKOFF/RESTART_WORKFLOW_WITH_BACKOFF. Defines the number of times the task will be retried if failed. + "retryCondition": "A String", # Optional. The retry condition that will be evaluated for this failure policy with the corresponding retry strategy. + "retryStrategy": "A String", # Defines what happens to the task upon failure. + }, + ], + }, "createTime": "A String", # Auto-generated. "creatorEmail": "A String", # The creator's email address. Auto-generated from the user's email. "description": "A String", # User-provided description intended to give more business context about the task. diff --git a/docs/dyn/migrationcenter_v1alpha1.projects.locations.assetsExportJobs.html b/docs/dyn/migrationcenter_v1alpha1.projects.locations.assetsExportJobs.html new file mode 100644 index 00000000000..8f0f364be08 --- /dev/null +++ b/docs/dyn/migrationcenter_v1alpha1.projects.locations.assetsExportJobs.html @@ -0,0 +1,404 @@ + + + +

Migration Center API . projects . locations . assetsExportJobs

+

Instance Methods

+

+ close()

+

Close httplib2 connections.

+

+ create(parent, assetsExportJobId=None, body=None, requestId=None, x__xgafv=None)

+

Creates a new assets export job.

+

+ delete(name, x__xgafv=None)

+

Deletes an assets export job.

+

+ get(name, x__xgafv=None)

+

Gets the details of an assets export job.

+

+ list(parent, pageSize=None, pageToken=None, x__xgafv=None)

+

Lists all the assets export jobs in a given project and location.

+

+ list_next()

+

Retrieves the next page of results.

+

+ run(name, body=None, x__xgafv=None)

+

Runs an assets export job, returning an AssetsExportJobExecution.

+

Method Details

+
+ close() +
Close httplib2 connections.
+
+ +
+ create(parent, assetsExportJobId=None, body=None, requestId=None, x__xgafv=None) +
Creates a new assets export job.
+
+Args:
+  parent: string, Required. The parent resource where the assets export job will be created. (required)
+  body: object, The request body.
+    The object takes the form of:
+
+{ # Assets export job message.
+  "condition": { # Conditions for selecting assets to export. # Optional. Conditions for selecting assets to export.
+    "filter": "A String", # Optional. Assets filter, supports the same syntax as asset listing.
+  },
+  "createTime": "A String", # Output only. Resource creation time.
+  "labels": { # Optional. Labels as key value pairs. Labels must meet the following constraints: * Keys and values can contain only lowercase letters, numeric characters, underscores, and dashes. * All characters must use UTF-8 encoding, and international characters are allowed. * Keys must start with a lowercase letter or international character. * Each resource is limited to a maximum of 64 labels. Both keys and values are additionally constrained to be <= 128 bytes.
+    "a_key": "A String",
+  },
+  "name": "A String", # Output only. Identifier. Resource name.
+  "networkDependencies": { # Configuration for network dependencies exports. # Export data regarding asset network dependencies.
+    "maxDays": 42, # Optional. When this value is set to a positive integer, network connections data will be returned for the most recent days for which data is available. When this value is unset (or set to zero), all available data is returned.
+  },
+  "recentExecutions": [ # Output only. Recent non expired executions of the job.
+    { # Execution status of assets export job.
+      "endTime": "A String", # Output only. Completion time of the export.
+      "executionId": "A String", # Output only. Globally unique identifier of the execution.
+      "expireTime": "A String", # Output only. Expiration time for the export and artifacts.
+      "result": { # Contains the result of the assets export. # Output only. Result of the export execution.
+        "error": { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # Output only. Error encountered during export.
+          "code": 42, # The status code, which should be an enum value of google.rpc.Code.
+          "details": [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
+            {
+              "a_key": "", # Properties of the object. Contains field @type with type URL.
+            },
+          ],
+          "message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
+        },
+        "signedUris": { # Contains a list of Signed URIs. # Output only. Signed URLs for downloading export artifacts.
+          "signedUris": [ # Output only. List of signed URIs.
+            { # Contains a signed URI.
+              "file": "A String", # Output only. Name of the file the Signed URI references.
+              "uri": "A String", # Output only. Download URI for the file.
+            },
+          ],
+        },
+      },
+      "startTime": "A String", # Output only. Execution timestamp.
+    },
+  ],
+  "signedUriDestination": { # Signed URI destination configuration. # Export to Cloud Storage files downloadable using signed URIs.
+  },
+  "updateTime": "A String", # Output only. Resource update time.
+}
+
+  assetsExportJobId: string, Required. The ID to use for the asset export job.
+  requestId: string, Optional. An optional request ID to identify requests. Specify a unique request ID so that if you must retry your request, the server will know to ignore the request if it has already been completed. The server will guarantee that for at least 60 minutes after the first request. For example, consider a situation where you make an initial request and the request times out. If you make the request again with the same request ID, the server can check if the original operation with the same request ID was received, and if so, will ignore the second request. This prevents clients from accidentally creating duplicate commitments. The request ID must be a valid UUID with the exception that zero UUID is not supported (00000000-0000-0000-0000-000000000000).
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # This resource represents a long-running operation that is the result of a network API call.
+  "done": True or False, # If the value is `false`, it means the operation is still in progress. If `true`, the operation is completed, and either `error` or `response` is available.
+  "error": { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # The error result of the operation in case of failure or cancellation.
+    "code": 42, # The status code, which should be an enum value of google.rpc.Code.
+    "details": [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
+      {
+        "a_key": "", # Properties of the object. Contains field @type with type URL.
+      },
+    ],
+    "message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
+  },
+  "metadata": { # Service-specific metadata associated with the operation. It typically contains progress information and common metadata such as create time. Some services might not provide such metadata. Any method that returns a long-running operation should document the metadata type, if any.
+    "a_key": "", # Properties of the object. Contains field @type with type URL.
+  },
+  "name": "A String", # The server-assigned name, which is only unique within the same service that originally returns it. If you use the default HTTP mapping, the `name` should be a resource name ending with `operations/{unique_id}`.
+  "response": { # The normal, successful response of the operation. If the original method returns no data on success, such as `Delete`, the response is `google.protobuf.Empty`. If the original method is standard `Get`/`Create`/`Update`, the response should be the resource. For other methods, the response should have the type `XxxResponse`, where `Xxx` is the original method name. For example, if the original method name is `TakeSnapshot()`, the inferred response type is `TakeSnapshotResponse`.
+    "a_key": "", # Properties of the object. Contains field @type with type URL.
+  },
+}
+
+ +
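A usage sketch with the google-api-python-client, assuming Application Default Credentials; the project, location, and job ID are hypothetical, and the body fields are taken from the schema above:

from googleapiclient.discovery import build

# Application Default Credentials are picked up automatically by build().
service = build("migrationcenter", "v1alpha1")

operation = (
    service.projects()
    .locations()
    .assetsExportJobs()
    .create(
        parent="projects/my-project/locations/us-central1",  # hypothetical parent
        assetsExportJobId="weekly-export",  # hypothetical job ID
        body={
            "networkDependencies": {"maxDays": 30},
            "signedUriDestination": {},
        },
    )
    .execute()
)

# create() returns a long-running operation; poll it until "done" is true.
print(operation["name"])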
+ delete(name, x__xgafv=None) +
Deletes an assets export job.
+
+Args:
+  name: string, Required. The name of the assets export job to delete. (required)
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # This resource represents a long-running operation that is the result of a network API call.
+  "done": True or False, # If the value is `false`, it means the operation is still in progress. If `true`, the operation is completed, and either `error` or `response` is available.
+  "error": { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # The error result of the operation in case of failure or cancellation.
+    "code": 42, # The status code, which should be an enum value of google.rpc.Code.
+    "details": [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
+      {
+        "a_key": "", # Properties of the object. Contains field @type with type URL.
+      },
+    ],
+    "message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
+  },
+  "metadata": { # Service-specific metadata associated with the operation. It typically contains progress information and common metadata such as create time. Some services might not provide such metadata. Any method that returns a long-running operation should document the metadata type, if any.
+    "a_key": "", # Properties of the object. Contains field @type with type URL.
+  },
+  "name": "A String", # The server-assigned name, which is only unique within the same service that originally returns it. If you use the default HTTP mapping, the `name` should be a resource name ending with `operations/{unique_id}`.
+  "response": { # The normal, successful response of the operation. If the original method returns no data on success, such as `Delete`, the response is `google.protobuf.Empty`. If the original method is standard `Get`/`Create`/`Update`, the response should be the resource. For other methods, the response should have the type `XxxResponse`, where `Xxx` is the original method name. For example, if the original method name is `TakeSnapshot()`, the inferred response type is `TakeSnapshotResponse`.
+    "a_key": "", # Properties of the object. Contains field @type with type URL.
+  },
+}
+
+ +
+ get(name, x__xgafv=None) +
Gets the details of an assets export job.
+
+Args:
+  name: string, Required. Name of the resource. (required)
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Assets export job message.
+  "condition": { # Conditions for selecting assets to export. # Optional. Conditions for selecting assets to export.
+    "filter": "A String", # Optional. Assets filter, supports the same syntax as asset listing.
+  },
+  "createTime": "A String", # Output only. Resource creation time.
+  "labels": { # Optional. Labels as key value pairs. Labels must meet the following constraints: * Keys and values can contain only lowercase letters, numeric characters, underscores, and dashes. * All characters must use UTF-8 encoding, and international characters are allowed. * Keys must start with a lowercase letter or international character. * Each resource is limited to a maximum of 64 labels. Both keys and values are additionally constrained to be <= 128 bytes.
+    "a_key": "A String",
+  },
+  "name": "A String", # Output only. Identifier. Resource name.
+  "networkDependencies": { # Configuration for network dependencies exports. # Export data regarding asset network dependencies.
+    "maxDays": 42, # Optional. When this value is set to a positive integer, network connections data will be returned for the most recent days for which data is available. When this value is unset (or set to zero), all available data is returned.
+  },
+  "recentExecutions": [ # Output only. Recent non expired executions of the job.
+    { # Execution status of assets export job.
+      "endTime": "A String", # Output only. Completion time of the export.
+      "executionId": "A String", # Output only. Globally unique identifier of the execution.
+      "expireTime": "A String", # Output only. Expiration time for the export and artifacts.
+      "result": { # Contains the result of the assets export. # Output only. Result of the export execution.
+        "error": { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # Output only. Error encountered during export.
+          "code": 42, # The status code, which should be an enum value of google.rpc.Code.
+          "details": [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
+            {
+              "a_key": "", # Properties of the object. Contains field @type with type URL.
+            },
+          ],
+          "message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
+        },
+        "signedUris": { # Contains a list of Signed URIs. # Output only. Signed URLs for downloading export artifacts.
+          "signedUris": [ # Output only. List of signed URIs.
+            { # Contains a signed URI.
+              "file": "A String", # Output only. Name of the file the Signed URI references.
+              "uri": "A String", # Output only. Download URI for the file.
+            },
+          ],
+        },
+      },
+      "startTime": "A String", # Output only. Execution timestamp.
+    },
+  ],
+  "signedUriDestination": { # Signed URI destination configuration. # Export to Cloud Storage files downloadable using signed URIs.
+  },
+  "updateTime": "A String", # Output only. Resource update time.
+}
+
+ +
+ list(parent, pageSize=None, pageToken=None, x__xgafv=None) +
Lists all the assets export jobs in a given project and location.
+
+Args:
+  parent: string, Required. Parent resource. (required)
+  pageSize: integer, Optional. Requested page size. The server may return fewer items than requested. If unspecified, the server will pick an appropriate default value.
+  pageToken: string, Optional. A token identifying a page of results that the server should return.
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Response message for listing assets export jobs.
+  "assetsExportJobs": [ # Output only. The list of assets export jobs.
+    { # Assets export job message.
+      "condition": { # Conditions for selecting assets to export. # Optional. Conditions for selecting assets to export.
+        "filter": "A String", # Optional. Assets filter, supports the same syntax as asset listing.
+      },
+      "createTime": "A String", # Output only. Resource creation time.
+      "labels": { # Optional. Labels as key value pairs. Labels must meet the following constraints: * Keys and values can contain only lowercase letters, numeric characters, underscores, and dashes. * All characters must use UTF-8 encoding, and international characters are allowed. * Keys must start with a lowercase letter or international character. * Each resource is limited to a maximum of 64 labels. Both keys and values are additionally constrained to be <= 128 bytes.
+        "a_key": "A String",
+      },
+      "name": "A String", # Output only. Identifier. Resource name.
+      "networkDependencies": { # Configuration for network dependencies exports. # Export data regarding asset network dependencies.
+        "maxDays": 42, # Optional. When this value is set to a positive integer, network connections data will be returned for the most recent days for which data is available. When this value is unset (or set to zero), all available data is returned.
+      },
+      "recentExecutions": [ # Output only. Recent non expired executions of the job.
+        { # Execution status of assets export job.
+          "endTime": "A String", # Output only. Completion time of the export.
+          "executionId": "A String", # Output only. Globally unique identifier of the execution.
+          "expireTime": "A String", # Output only. Expiration time for the export and artifacts.
+          "result": { # Contains the result of the assets export. # Output only. Result of the export execution.
+            "error": { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # Output only. Error encountered during export.
+              "code": 42, # The status code, which should be an enum value of google.rpc.Code.
+              "details": [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
+                {
+                  "a_key": "", # Properties of the object. Contains field @type with type URL.
+                },
+              ],
+              "message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
+            },
+            "signedUris": { # Contains a list of Signed URIs. # Output only. Signed URLs for downloading export artifacts.
+              "signedUris": [ # Output only. List of signed URIs.
+                { # Contains a signed URI.
+                  "file": "A String", # Output only. Name of the file the Signed URI references.
+                  "uri": "A String", # Output only. Download URI for the file.
+                },
+              ],
+            },
+          },
+          "startTime": "A String", # Output only. Execution timestamp.
+        },
+      ],
+      "signedUriDestination": { # Signed URI destination configuration. # Export to Cloud Storage files downloadable using signed URIs.
+      },
+      "updateTime": "A String", # Output only. Resource update time.
+    },
+  ],
+  "nextPageToken": "A String", # Output only. A token identifying a page of results the server should return.
+}
+
+ +
+ list_next() +
Retrieves the next page of results.
+
+        Args:
+          previous_request: The request for the previous page. (required)
+          previous_response: The response from the request for the previous page. (required)
+
+        Returns:
+          A request object that you can call 'execute()' on to request the next
+          page. Returns None if there are no more items in the collection.
+        
+
+ +
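list() and list_next() compose into the client library's standard pagination loop; a sketch, reusing the service object from the create() example above (the parent is hypothetical):

jobs = service.projects().locations().assetsExportJobs()
request = jobs.list(parent="projects/my-project/locations/us-central1")  # hypothetical
while request is not None:
    response = request.execute()
    for job in response.get("assetsExportJobs", []):
        print(job["name"], job.get("updateTime"))
    # list_next() returns None once the collection is exhausted.
    request = jobs.list_next(previous_request=request, previous_response=response)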
+ run(name, body=None, x__xgafv=None) +
Runs an assets export job, returning an AssetsExportJobExecution.
+
+Args:
+  name: string, Required. Name of the resource. (required)
+  body: object, The request body.
+    The object takes the form of:
+
+{ # A request to run an assets export job.
+  "requestId": "A String", # Optional. An optional request ID to identify requests. Specify a unique request ID so that if you must retry your request, the server will know to ignore the request if it has already been completed. The server will guarantee that for at least 60 minutes after the first request. For example, consider a situation where you make an initial request and the request times out. If you make the request again with the same request ID, the server can check if original operation with the same request ID was received, and if so, will ignore the second request. This prevents clients from accidentally creating duplicate commitments. The request ID must be a valid UUID with the exception that zero UUID is not supported (00000000-0000-0000-0000-000000000000).
+}
+
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # This resource represents a long-running operation that is the result of a network API call.
+  "done": True or False, # If the value is `false`, it means the operation is still in progress. If `true`, the operation is completed, and either `error` or `response` is available.
+  "error": { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # The error result of the operation in case of failure or cancellation.
+    "code": 42, # The status code, which should be an enum value of google.rpc.Code.
+    "details": [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
+      {
+        "a_key": "", # Properties of the object. Contains field @type with type URL.
+      },
+    ],
+    "message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
+  },
+  "metadata": { # Service-specific metadata associated with the operation. It typically contains progress information and common metadata such as create time. Some services might not provide such metadata. Any method that returns a long-running operation should document the metadata type, if any.
+    "a_key": "", # Properties of the object. Contains field @type with type URL.
+  },
+  "name": "A String", # The server-assigned name, which is only unique within the same service that originally returns it. If you use the default HTTP mapping, the `name` should be a resource name ending with `operations/{unique_id}`.
+  "response": { # The normal, successful response of the operation. If the original method returns no data on success, such as `Delete`, the response is `google.protobuf.Empty`. If the original method is standard `Get`/`Create`/`Update`, the response should be the resource. For other methods, the response should have the type `XxxResponse`, where `Xxx` is the original method name. For example, if the original method name is `TakeSnapshot()`, the inferred response type is `TakeSnapshotResponse`.
+    "a_key": "", # Properties of the object. Contains field @type with type URL.
+  },
+}
+
+ + \ No newline at end of file diff --git a/docs/dyn/migrationcenter_v1alpha1.projects.locations.html b/docs/dyn/migrationcenter_v1alpha1.projects.locations.html index 07f0920af74..fac4a405dff 100644 --- a/docs/dyn/migrationcenter_v1alpha1.projects.locations.html +++ b/docs/dyn/migrationcenter_v1alpha1.projects.locations.html @@ -79,6 +79,11 @@

Instance Methods

Returns the assets Resource.

+

+ assetsExportJobs() +

+

Returns the assetsExportJobs Resource.

+

discoveryClients()

diff --git a/docs/dyn/migrationcenter_v1alpha1.projects.locations.preferenceSets.html b/docs/dyn/migrationcenter_v1alpha1.projects.locations.preferenceSets.html index c2fdb5d85c7..1a626c84d7b 100644 --- a/docs/dyn/migrationcenter_v1alpha1.projects.locations.preferenceSets.html +++ b/docs/dyn/migrationcenter_v1alpha1.projects.locations.preferenceSets.html @@ -164,7 +164,7 @@

Method Details

"virtualMachinePreferences": { # VirtualMachinePreferences enables you to create sets of preferences, for example, a geographical location and pricing track, for your migrated virtual machines. The set of preferences influence recommendations for migrating virtual machine assets. # A set of preferences that applies to all virtual machines in the context. "commitmentPlan": "A String", # Commitment plan to consider when calculating costs for virtual machine insights and recommendations. If you are unsure which value to set, a 3 year commitment plan is often a good value to start with. "computeEnginePreferences": { # The user preferences relating to Compute Engine target platform. # Compute Engine preferences concern insights and recommendations for Compute Engine target. - "licenseType": "A String", # Overridden by os_pricing_preferences if specified. License type to consider when calculating costs for virtual machine insights and recommendations. If unspecified, costs are calculated based on the default licensing plan. + "licenseType": "A String", # If os_pricing_preferences is specified, it overrides this field. License type to consider when calculating costs for virtual machine insights and recommendations. If unspecified, costs are calculated based on the default licensing plan. "machinePreferences": { # The type of machines to consider when calculating virtual machine migration insights and recommendations. Not all machine types are available in all zones and regions. # Preferences concerning the machine types to consider on Compute Engine. "allowedMachineSeries": [ # Compute Engine machine series to consider for insights and recommendations. If empty, no restriction is applied on the machine series. { # A machine series, for a target product (e.g. Compute Engine, Google Cloud VMware Engine). @@ -344,7 +344,7 @@

Method Details

"virtualMachinePreferences": { # VirtualMachinePreferences enables you to create sets of preferences, for example, a geographical location and pricing track, for your migrated virtual machines. The set of preferences influence recommendations for migrating virtual machine assets. # A set of preferences that applies to all virtual machines in the context. "commitmentPlan": "A String", # Commitment plan to consider when calculating costs for virtual machine insights and recommendations. If you are unsure which value to set, a 3 year commitment plan is often a good value to start with. "computeEnginePreferences": { # The user preferences relating to Compute Engine target platform. # Compute Engine preferences concern insights and recommendations for Compute Engine target. - "licenseType": "A String", # Overridden by os_pricing_preferences if specified. License type to consider when calculating costs for virtual machine insights and recommendations. If unspecified, costs are calculated based on the default licensing plan. + "licenseType": "A String", # If os_pricing_preferences is specified, it overrides this field. License type to consider when calculating costs for virtual machine insights and recommendations. If unspecified, costs are calculated based on the default licensing plan. "machinePreferences": { # The type of machines to consider when calculating virtual machine migration insights and recommendations. Not all machine types are available in all zones and regions. # Preferences concerning the machine types to consider on Compute Engine. "allowedMachineSeries": [ # Compute Engine machine series to consider for insights and recommendations. If empty, no restriction is applied on the machine series. { # A machine series, for a target product (e.g. Compute Engine, Google Cloud VMware Engine). @@ -464,7 +464,7 @@

Method Details

"virtualMachinePreferences": { # VirtualMachinePreferences enables you to create sets of preferences, for example, a geographical location and pricing track, for your migrated virtual machines. The set of preferences influence recommendations for migrating virtual machine assets. # A set of preferences that applies to all virtual machines in the context. "commitmentPlan": "A String", # Commitment plan to consider when calculating costs for virtual machine insights and recommendations. If you are unsure which value to set, a 3 year commitment plan is often a good value to start with. "computeEnginePreferences": { # The user preferences relating to Compute Engine target platform. # Compute Engine preferences concern insights and recommendations for Compute Engine target. - "licenseType": "A String", # Overridden by os_pricing_preferences if specified. License type to consider when calculating costs for virtual machine insights and recommendations. If unspecified, costs are calculated based on the default licensing plan. + "licenseType": "A String", # If os_pricing_preferences is specified, it overrides this field. License type to consider when calculating costs for virtual machine insights and recommendations. If unspecified, costs are calculated based on the default licensing plan. "machinePreferences": { # The type of machines to consider when calculating virtual machine migration insights and recommendations. Not all machine types are available in all zones and regions. # Preferences concerning the machine types to consider on Compute Engine. "allowedMachineSeries": [ # Compute Engine machine series to consider for insights and recommendations. If empty, no restriction is applied on the machine series. { # A machine series, for a target product (e.g. Compute Engine, Google Cloud VMware Engine). @@ -592,7 +592,7 @@

Method Details

"virtualMachinePreferences": { # VirtualMachinePreferences enables you to create sets of preferences, for example, a geographical location and pricing track, for your migrated virtual machines. The set of preferences influence recommendations for migrating virtual machine assets. # A set of preferences that applies to all virtual machines in the context. "commitmentPlan": "A String", # Commitment plan to consider when calculating costs for virtual machine insights and recommendations. If you are unsure which value to set, a 3 year commitment plan is often a good value to start with. "computeEnginePreferences": { # The user preferences relating to Compute Engine target platform. # Compute Engine preferences concern insights and recommendations for Compute Engine target. - "licenseType": "A String", # Overridden by os_pricing_preferences if specified. License type to consider when calculating costs for virtual machine insights and recommendations. If unspecified, costs are calculated based on the default licensing plan. + "licenseType": "A String", # If os_pricing_preferences is specified, it overrides this field. License type to consider when calculating costs for virtual machine insights and recommendations. If unspecified, costs are calculated based on the default licensing plan. "machinePreferences": { # The type of machines to consider when calculating virtual machine migration insights and recommendations. Not all machine types are available in all zones and regions. # Preferences concerning the machine types to consider on Compute Engine. "allowedMachineSeries": [ # Compute Engine machine series to consider for insights and recommendations. If empty, no restriction is applied on the machine series. { # A machine series, for a target product (e.g. Compute Engine, Google Cloud VMware Engine). diff --git a/docs/dyn/migrationcenter_v1alpha1.projects.locations.reportConfigs.reports.html b/docs/dyn/migrationcenter_v1alpha1.projects.locations.reportConfigs.reports.html index 3785c024c48..efc00af502a 100644 --- a/docs/dyn/migrationcenter_v1alpha1.projects.locations.reportConfigs.reports.html +++ b/docs/dyn/migrationcenter_v1alpha1.projects.locations.reportConfigs.reports.html @@ -393,7 +393,7 @@

Method Details

"machinePreferences": { # VirtualMachinePreferences enables you to create sets of preferences, for example, a geographical location and pricing track, for your migrated virtual machines. The set of preferences influence recommendations for migrating virtual machine assets. # A set of preferences that applies to all machines in the context. "commitmentPlan": "A String", # Commitment plan to consider when calculating costs for virtual machine insights and recommendations. If you are unsure which value to set, a 3 year commitment plan is often a good value to start with. "computeEnginePreferences": { # The user preferences relating to Compute Engine target platform. # Compute Engine preferences concern insights and recommendations for Compute Engine target. - "licenseType": "A String", # Overridden by os_pricing_preferences if specified. License type to consider when calculating costs for virtual machine insights and recommendations. If unspecified, costs are calculated based on the default licensing plan. + "licenseType": "A String", # If os_pricing_preferences is specified, it overrides this field. License type to consider when calculating costs for virtual machine insights and recommendations. If unspecified, costs are calculated based on the default licensing plan. "machinePreferences": { # The type of machines to consider when calculating virtual machine migration insights and recommendations. Not all machine types are available in all zones and regions. # Preferences concerning the machine types to consider on Compute Engine. "allowedMachineSeries": [ # Compute Engine machine series to consider for insights and recommendations. If empty, no restriction is applied on the machine series. { # A machine series, for a target product (e.g. Compute Engine, Google Cloud VMware Engine). @@ -530,7 +530,7 @@

Method Details

"virtualMachinePreferences": { # VirtualMachinePreferences enables you to create sets of preferences, for example, a geographical location and pricing track, for your migrated virtual machines. The set of preferences influence recommendations for migrating virtual machine assets. # A set of preferences that applies to all virtual machines in the context. "commitmentPlan": "A String", # Commitment plan to consider when calculating costs for virtual machine insights and recommendations. If you are unsure which value to set, a 3 year commitment plan is often a good value to start with. "computeEnginePreferences": { # The user preferences relating to Compute Engine target platform. # Compute Engine preferences concern insights and recommendations for Compute Engine target. - "licenseType": "A String", # Overridden by os_pricing_preferences if specified. License type to consider when calculating costs for virtual machine insights and recommendations. If unspecified, costs are calculated based on the default licensing plan. + "licenseType": "A String", # If os_pricing_preferences is specified, it overrides this field. License type to consider when calculating costs for virtual machine insights and recommendations. If unspecified, costs are calculated based on the default licensing plan. "machinePreferences": { # The type of machines to consider when calculating virtual machine migration insights and recommendations. Not all machine types are available in all zones and regions. # Preferences concerning the machine types to consider on Compute Engine. "allowedMachineSeries": [ # Compute Engine machine series to consider for insights and recommendations. If empty, no restriction is applied on the machine series. { # A machine series, for a target product (e.g. Compute Engine, Google Cloud VMware Engine). @@ -1070,7 +1070,7 @@

Method Details

"machinePreferences": { # VirtualMachinePreferences enables you to create sets of preferences, for example, a geographical location and pricing track, for your migrated virtual machines. The set of preferences influence recommendations for migrating virtual machine assets. # A set of preferences that applies to all machines in the context. "commitmentPlan": "A String", # Commitment plan to consider when calculating costs for virtual machine insights and recommendations. If you are unsure which value to set, a 3 year commitment plan is often a good value to start with. "computeEnginePreferences": { # The user preferences relating to Compute Engine target platform. # Compute Engine preferences concern insights and recommendations for Compute Engine target. - "licenseType": "A String", # Overridden by os_pricing_preferences if specified. License type to consider when calculating costs for virtual machine insights and recommendations. If unspecified, costs are calculated based on the default licensing plan. + "licenseType": "A String", # If os_pricing_preferences is specified, it overrides this field. License type to consider when calculating costs for virtual machine insights and recommendations. If unspecified, costs are calculated based on the default licensing plan. "machinePreferences": { # The type of machines to consider when calculating virtual machine migration insights and recommendations. Not all machine types are available in all zones and regions. # Preferences concerning the machine types to consider on Compute Engine. "allowedMachineSeries": [ # Compute Engine machine series to consider for insights and recommendations. If empty, no restriction is applied on the machine series. { # A machine series, for a target product (e.g. Compute Engine, Google Cloud VMware Engine). @@ -1207,7 +1207,7 @@

Method Details

"virtualMachinePreferences": { # VirtualMachinePreferences enables you to create sets of preferences, for example, a geographical location and pricing track, for your migrated virtual machines. The set of preferences influence recommendations for migrating virtual machine assets. # A set of preferences that applies to all virtual machines in the context. "commitmentPlan": "A String", # Commitment plan to consider when calculating costs for virtual machine insights and recommendations. If you are unsure which value to set, a 3 year commitment plan is often a good value to start with. "computeEnginePreferences": { # The user preferences relating to Compute Engine target platform. # Compute Engine preferences concern insights and recommendations for Compute Engine target. - "licenseType": "A String", # Overridden by os_pricing_preferences if specified. License type to consider when calculating costs for virtual machine insights and recommendations. If unspecified, costs are calculated based on the default licensing plan. + "licenseType": "A String", # If os_pricing_preferences is specified, it overrides this field. License type to consider when calculating costs for virtual machine insights and recommendations. If unspecified, costs are calculated based on the default licensing plan. "machinePreferences": { # The type of machines to consider when calculating virtual machine migration insights and recommendations. Not all machine types are available in all zones and regions. # Preferences concerning the machine types to consider on Compute Engine. "allowedMachineSeries": [ # Compute Engine machine series to consider for insights and recommendations. If empty, no restriction is applied on the machine series. { # A machine series, for a target product (e.g. Compute Engine, Google Cloud VMware Engine). @@ -1688,7 +1688,7 @@

Method Details

"machinePreferences": { # VirtualMachinePreferences enables you to create sets of preferences, for example, a geographical location and pricing track, for your migrated virtual machines. The set of preferences influence recommendations for migrating virtual machine assets. # A set of preferences that applies to all machines in the context. "commitmentPlan": "A String", # Commitment plan to consider when calculating costs for virtual machine insights and recommendations. If you are unsure which value to set, a 3 year commitment plan is often a good value to start with. "computeEnginePreferences": { # The user preferences relating to Compute Engine target platform. # Compute Engine preferences concern insights and recommendations for Compute Engine target. - "licenseType": "A String", # Overridden by os_pricing_preferences if specified. License type to consider when calculating costs for virtual machine insights and recommendations. If unspecified, costs are calculated based on the default licensing plan. + "licenseType": "A String", # If os_pricing_preferences is specified, it overrides this field. License type to consider when calculating costs for virtual machine insights and recommendations. If unspecified, costs are calculated based on the default licensing plan. "machinePreferences": { # The type of machines to consider when calculating virtual machine migration insights and recommendations. Not all machine types are available in all zones and regions. # Preferences concerning the machine types to consider on Compute Engine. "allowedMachineSeries": [ # Compute Engine machine series to consider for insights and recommendations. If empty, no restriction is applied on the machine series. { # A machine series, for a target product (e.g. Compute Engine, Google Cloud VMware Engine). @@ -1825,7 +1825,7 @@

Method Details

"virtualMachinePreferences": { # VirtualMachinePreferences enables you to create sets of preferences, for example, a geographical location and pricing track, for your migrated virtual machines. The set of preferences influence recommendations for migrating virtual machine assets. # A set of preferences that applies to all virtual machines in the context. "commitmentPlan": "A String", # Commitment plan to consider when calculating costs for virtual machine insights and recommendations. If you are unsure which value to set, a 3 year commitment plan is often a good value to start with. "computeEnginePreferences": { # The user preferences relating to Compute Engine target platform. # Compute Engine preferences concern insights and recommendations for Compute Engine target. - "licenseType": "A String", # Overridden by os_pricing_preferences if specified. License type to consider when calculating costs for virtual machine insights and recommendations. If unspecified, costs are calculated based on the default licensing plan. + "licenseType": "A String", # If os_pricing_preferences is specified, it overrides this field. License type to consider when calculating costs for virtual machine insights and recommendations. If unspecified, costs are calculated based on the default licensing plan. "machinePreferences": { # The type of machines to consider when calculating virtual machine migration insights and recommendations. Not all machine types are available in all zones and regions. # Preferences concerning the machine types to consider on Compute Engine. "allowedMachineSeries": [ # Compute Engine machine series to consider for insights and recommendations. If empty, no restriction is applied on the machine series. { # A machine series, for a target product (e.g. Compute Engine, Google Cloud VMware Engine). diff --git a/docs/dyn/monitoring_v3.uptimeCheckIps.html b/docs/dyn/monitoring_v3.uptimeCheckIps.html index fd28094e48d..41ae3babc34 100644 --- a/docs/dyn/monitoring_v3.uptimeCheckIps.html +++ b/docs/dyn/monitoring_v3.uptimeCheckIps.html @@ -79,7 +79,7 @@

Instance Methods

Close httplib2 connections.

list(pageSize=None, pageToken=None, x__xgafv=None)

-

Returns the list of IP addresses that checkers run from

+

Returns the list of IP addresses that checkers run from.

list_next()

Retrieves the next page of results.

@@ -91,7 +91,7 @@

Method Details

list(pageSize=None, pageToken=None, x__xgafv=None) -
Returns the list of IP addresses that checkers run from
+  
Returns the list of IP addresses that checkers run from.
 
 Args:
   pageSize: integer, The maximum number of results to return in a single response. The server may further constrain the maximum number of results returned in a single page. If the page_size is <=0, the server will decide the number of results to be returned. NOTE: this field is not yet implemented
diff --git a/docs/dyn/networkconnectivity_v1.projects.locations.global_.hubs.routeTables.routes.html b/docs/dyn/networkconnectivity_v1.projects.locations.global_.hubs.routeTables.routes.html
index 490b4e7192c..6c571119483 100644
--- a/docs/dyn/networkconnectivity_v1.projects.locations.global_.hubs.routeTables.routes.html
+++ b/docs/dyn/networkconnectivity_v1.projects.locations.global_.hubs.routeTables.routes.html
@@ -115,9 +115,25 @@ 

Method Details

}, "location": "A String", # Output only. The origin location of the route. Uses the following form: "projects/{project}/locations/{location}" Example: projects/1234/locations/us-central1 "name": "A String", # Immutable. The name of the route. Route names must be unique. Route names use the following form: `projects/{project_number}/locations/global/hubs/{hub}/routeTables/{route_table_id}/routes/{route_id}` + "nextHopInterconnectAttachment": { # A route next hop that leads to an interconnect attachment resource. # Immutable. The next-hop VLAN attachment for packets on this route. + "siteToSiteDataTransfer": True or False, # Indicates whether site-to-site data transfer is allowed for this interconnect attachment resource. Data transfer is available only in [supported locations](https://cloud.google.com/network-connectivity/docs/network-connectivity-center/concepts/locations). + "uri": "A String", # The URI of the interconnect attachment resource. + "vpcNetwork": "A String", # The VPC network where this interconnect attachment is located. + }, + "nextHopRouterApplianceInstance": { # A route next hop that leads to a Router appliance instance. # Immutable. The next-hop Router appliance instance for packets on this route. + "siteToSiteDataTransfer": True or False, # Indicates whether site-to-site data transfer is allowed for this Router appliance instance resource. Data transfer is available only in [supported locations](https://cloud.google.com/network-connectivity/docs/network-connectivity-center/concepts/locations). + "uri": "A String", # The URI of the Router appliance instance. + "vpcNetwork": "A String", # The VPC network where this VM is located. + }, "nextHopVpcNetwork": { # Immutable. The destination VPC network for packets on this route. "uri": "A String", # The URI of the VPC network resource }, + "nextHopVpnTunnel": { # A route next hop that leads to a VPN tunnel resource. # Immutable. The next-hop VPN tunnel for packets on this route. + "siteToSiteDataTransfer": True or False, # Indicates whether site-to-site data transfer is allowed for this VPN tunnel resource. Data transfer is available only in [supported locations](https://cloud.google.com/network-connectivity/docs/network-connectivity-center/concepts/locations). + "uri": "A String", # The URI of the VPN tunnel resource. + "vpcNetwork": "A String", # The VPC network where this VPN tunnel is located. + }, + "priority": "A String", # Output only. The priority of this route. Priority is used to break ties in cases where a destination matches more than one route. In these cases the route with the lowest-numbered priority value wins. "spoke": "A String", # Immutable. The spoke that this route leads to. Example: projects/12345/locations/global/spokes/SPOKE "state": "A String", # Output only. The current lifecycle state of the route. "type": "A String", # Output only. The route's type. Its type is determined by the properties of its IP address range. @@ -156,9 +172,25 @@

Method Details

}, "location": "A String", # Output only. The origin location of the route. Uses the following form: "projects/{project}/locations/{location}" Example: projects/1234/locations/us-central1 "name": "A String", # Immutable. The name of the route. Route names must be unique. Route names use the following form: `projects/{project_number}/locations/global/hubs/{hub}/routeTables/{route_table_id}/routes/{route_id}` + "nextHopInterconnectAttachment": { # A route next hop that leads to an interconnect attachment resource. # Immutable. The next-hop VLAN attachment for packets on this route. + "siteToSiteDataTransfer": True or False, # Indicates whether site-to-site data transfer is allowed for this interconnect attachment resource. Data transfer is available only in [supported locations](https://cloud.google.com/network-connectivity/docs/network-connectivity-center/concepts/locations). + "uri": "A String", # The URI of the interconnect attachment resource. + "vpcNetwork": "A String", # The VPC network where this interconnect attachment is located. + }, + "nextHopRouterApplianceInstance": { # A route next hop that leads to a Router appliance instance. # Immutable. The next-hop Router appliance instance for packets on this route. + "siteToSiteDataTransfer": True or False, # Indicates whether site-to-site data transfer is allowed for this Router appliance instance resource. Data transfer is available only in [supported locations](https://cloud.google.com/network-connectivity/docs/network-connectivity-center/concepts/locations). + "uri": "A String", # The URI of the Router appliance instance. + "vpcNetwork": "A String", # The VPC network where this VM is located. + }, "nextHopVpcNetwork": { # Immutable. The destination VPC network for packets on this route. "uri": "A String", # The URI of the VPC network resource }, + "nextHopVpnTunnel": { # A route next hop that leads to a VPN tunnel resource. # Immutable. The next-hop VPN tunnel for packets on this route. + "siteToSiteDataTransfer": True or False, # Indicates whether site-to-site data transfer is allowed for this VPN tunnel resource. Data transfer is available only in [supported locations](https://cloud.google.com/network-connectivity/docs/network-connectivity-center/concepts/locations). + "uri": "A String", # The URI of the VPN tunnel resource. + "vpcNetwork": "A String", # The VPC network where this VPN tunnel is located. + }, + "priority": "A String", # Output only. The priority of this route. Priority is used to break ties in cases where a destination matches more than one route. In these cases the route with the lowest-numbered priority value wins. "spoke": "A String", # Immutable. The spoke that this route leads to. Example: projects/12345/locations/global/spokes/SPOKE "state": "A String", # Output only. The current lifecycle state of the route. "type": "A String", # Output only. The route's type. Its type is determined by the properties of its IP address range. diff --git a/docs/dyn/networkconnectivity_v1.projects.locations.global_.policyBasedRoutes.html b/docs/dyn/networkconnectivity_v1.projects.locations.global_.policyBasedRoutes.html index bc987a82779..0fa1eceea52 100644 --- a/docs/dyn/networkconnectivity_v1.projects.locations.global_.policyBasedRoutes.html +++ b/docs/dyn/networkconnectivity_v1.projects.locations.global_.policyBasedRoutes.html @@ -116,13 +116,13 @@

Method Details

body: object, The request body.
  The object takes the form of:

-{ # Policy-based routes route L4 network traffic based on not just destination IP address, but also source IP address, protocol, and more. If a policy-based route conflicts with other types of routes, the policy-based route always take precedence.
+{ # Policy-based routes route L4 network traffic based on not just destination IP address, but also source IP address, protocol, and more. If a policy-based route conflicts with other types of routes, the policy-based route always takes precedence.
   "createTime": "A String", # Output only. Time when the policy-based route was created.
   "description": "A String", # Optional. An optional description of this resource. Provide this field when you create the resource.
   "filter": { # Filter matches L4 traffic. # Required. The filter to match L4 traffic.
     "destRange": "A String", # Optional. The destination IP range of outgoing packets that this policy-based route applies to. Default is "0.0.0.0/0" if protocol version is IPv4.
     "ipProtocol": "A String", # Optional. The IP protocol that this policy-based route applies to. Valid values are 'TCP', 'UDP', and 'ALL'. Default is 'ALL'.
-    "protocolVersion": "A String", # Required. Internet protocol versions this policy-based route applies to. For this version, only IPV4 is supported.
+    "protocolVersion": "A String", # Required. Internet protocol versions this policy-based route applies to. For this version, only IPV4 is supported. IPV6 is supported in preview.
     "srcRange": "A String", # Optional. The source IP range of outgoing packets that this policy-based route applies to. Default is "0.0.0.0/0" if protocol version is IPv4.
   },
   "interconnectAttachment": { # InterconnectAttachment that this route applies to. # Optional. The interconnect attachments that this policy-based route applies to.
@@ -139,8 +139,8 @@
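A hedged sketch (not part of the generated docs) of assembling a create call from only the fields documented in this file; every name and CIDR range is a placeholder, and a production route would also set a next-hop field not shown in this excerpt:

    from googleapiclient.discovery import build

    service = build("networkconnectivity", "v1")

    # Only fields documented above are used here.
    body = {
        "description": "Steer TCP traffic from tagged VMs",
        "filter": {
            "protocolVersion": "IPV4",  # IPV6 is available in preview per this change
            "ipProtocol": "TCP",
            "srcRange": "10.0.0.0/24",
            "destRange": "0.0.0.0/0",
        },
        "priority": 1000,
        "virtualMachine": {"tags": ["pbr-target"]},
    }

    operation = (
        service.projects()
        .locations()
        .global_()
        .policyBasedRoutes()
        .create(
            parent="projects/my-project/locations/global",
            policyBasedRouteId="my-pbr",
            body=body,
        )
        .execute()
    )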

Method Details

"priority": 42, # Optional. The priority of this policy-based route. Priority is used to break ties in cases where there are more than one matching policy-based routes found. In cases where multiple policy-based routes are matched, the one with the lowest-numbered priority value wins. The default value is 1000. The priority value must be from 1 to 65535, inclusive. "selfLink": "A String", # Output only. Server-defined fully-qualified URL for this resource. "updateTime": "A String", # Output only. Time when the policy-based route was updated. - "virtualMachine": { # VM instances to which this policy-based route applies to. # Optional. VM instances to which this policy-based route applies to. - "tags": [ # Optional. A list of VM instance tags the this policy-based route applies to. VM instances that have ANY of tags specified here will install this PBR. + "virtualMachine": { # VM instances that this policy-based route applies to. # Optional. VM instances that this policy-based route applies to. + "tags": [ # Optional. A list of VM instance tags that this policy-based route applies to. VM instances that have ANY of tags specified here installs this PBR. "A String", ], }, @@ -156,7 +156,7 @@

Method Details

}
  policyBasedRouteId: string, Required. Unique id for the policy-based route to create.
-  requestId: string, Optional. An optional request ID to identify requests. Specify a unique request ID so that if you must retry your request, the server will know to ignore the request if it has already been completed. The server will guarantee that for at least 60 minutes since the first request. For example, consider a situation where you make an initial request and the request times out. If you make the request again with the same request ID, the server can check if original operation with the same request ID was received, and if so, will ignore the second request. This prevents clients from accidentally creating duplicate commitments. The request ID must be a valid UUID with the exception that zero UUID is not supported (00000000-0000-0000-0000-000000000000).
+  requestId: string, Optional. An optional request ID to identify requests. Specify a unique request ID so that if you must retry your request, the server knows to ignore the request if it has already been completed. The server guarantees that for at least 60 minutes after the first request. For example, consider a situation where you make an initial request and the request times out. If you make the request again with the same request ID, the server can check if the original operation with the same request ID was received, and if so, ignores the second request. This prevents clients from accidentally creating duplicate commitments. The request ID must be a valid UUID with the exception that zero UUID is not supported (00000000-0000-0000-0000-000000000000).
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
@@ -192,7 +192,7 @@

Method Details

Args:
  name: string, Required. Name of the policy-based route resource to delete. (required)
-  requestId: string, Optional. An optional request ID to identify requests. Specify a unique request ID so that if you must retry your request, the server will know to ignore the request if it has already been completed. The server will guarantee that for at least 60 minutes after the first request. For example, consider a situation where you make an initial request and the request times out. If you make the request again with the same request ID, the server can check if original operation with the same request ID was received, and if so, will ignore the second request. This prevents clients from accidentally creating duplicate commitments. The request ID must be a valid UUID with the exception that zero UUID is not supported (00000000-0000-0000-0000-000000000000).
+  requestId: string, Optional. An optional request ID to identify requests. Specify a unique request ID so that if you must retry your request, the server knows to ignore the request if it has already been completed. The server guarantees that for at least 60 minutes after the first request. For example, consider a situation where you make an initial request and the request times out. If you make the request again with the same request ID, the server can check if the original operation with the same request ID was received, and if so, ignores the second request. This prevents clients from accidentally creating duplicate commitments. The request ID must be a valid UUID with the exception that zero UUID is not supported (00000000-0000-0000-0000-000000000000).
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
@@ -236,13 +236,13 @@
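The requestId contract described above is effectively a client-supplied idempotency key; a sketch of the intended retry pattern (resource names hypothetical, and the bare retry-on-any-HttpError is deliberately simplified):

    import uuid

    from googleapiclient.discovery import build
    from googleapiclient.errors import HttpError

    service = build("networkconnectivity", "v1")
    pbr = service.projects().locations().global_().policyBasedRoutes()
    name = "projects/my-project/locations/global/policyBasedRoutes/my-pbr"

    # Generate the ID once and reuse it on every retry of the same logical call.
    request_id = str(uuid.uuid4())
    try:
        operation = pbr.delete(name=name, requestId=request_id).execute()
    except HttpError:
        # Same requestId: the server can detect the duplicate for at least
        # 60 minutes and will not execute the delete twice.
        operation = pbr.delete(name=name, requestId=request_id).execute()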

Method Details

Returns:
  An object of the form:

-{ # Policy-based routes route L4 network traffic based on not just destination IP address, but also source IP address, protocol, and more. If a policy-based route conflicts with other types of routes, the policy-based route always take precedence.
+{ # Policy-based routes route L4 network traffic based on not just destination IP address, but also source IP address, protocol, and more. If a policy-based route conflicts with other types of routes, the policy-based route always takes precedence.
   "createTime": "A String", # Output only. Time when the policy-based route was created.
   "description": "A String", # Optional. An optional description of this resource. Provide this field when you create the resource.
   "filter": { # Filter matches L4 traffic. # Required. The filter to match L4 traffic.
     "destRange": "A String", # Optional. The destination IP range of outgoing packets that this policy-based route applies to. Default is "0.0.0.0/0" if protocol version is IPv4.
     "ipProtocol": "A String", # Optional. The IP protocol that this policy-based route applies to. Valid values are 'TCP', 'UDP', and 'ALL'. Default is 'ALL'.
-    "protocolVersion": "A String", # Required. Internet protocol versions this policy-based route applies to. For this version, only IPV4 is supported.
+    "protocolVersion": "A String", # Required. Internet protocol versions this policy-based route applies to. For this version, only IPV4 is supported. IPV6 is supported in preview.
     "srcRange": "A String", # Optional. The source IP range of outgoing packets that this policy-based route applies to. Default is "0.0.0.0/0" if protocol version is IPv4.
   },
   "interconnectAttachment": { # InterconnectAttachment that this route applies to. # Optional. The interconnect attachments that this policy-based route applies to.
@@ -259,8 +259,8 @@

Method Details

"priority": 42, # Optional. The priority of this policy-based route. Priority is used to break ties in cases where there are more than one matching policy-based routes found. In cases where multiple policy-based routes are matched, the one with the lowest-numbered priority value wins. The default value is 1000. The priority value must be from 1 to 65535, inclusive. "selfLink": "A String", # Output only. Server-defined fully-qualified URL for this resource. "updateTime": "A String", # Output only. Time when the policy-based route was updated. - "virtualMachine": { # VM instances to which this policy-based route applies to. # Optional. VM instances to which this policy-based route applies to. - "tags": [ # Optional. A list of VM instance tags the this policy-based route applies to. VM instances that have ANY of tags specified here will install this PBR. + "virtualMachine": { # VM instances that this policy-based route applies to. # Optional. VM instances that this policy-based route applies to. + "tags": [ # Optional. A list of VM instance tags that this policy-based route applies to. VM instances that have ANY of tags specified here installs this PBR. "A String", ], }, @@ -345,13 +345,13 @@

Method Details

{ # Response for PolicyBasedRouting.ListPolicyBasedRoutes method.
   "nextPageToken": "A String", # The next pagination token in the List response. It should be used as page_token for the following request. An empty value means no more results.
   "policyBasedRoutes": [ # Policy-based routes to be returned.
-    { # Policy-based routes route L4 network traffic based on not just destination IP address, but also source IP address, protocol, and more. If a policy-based route conflicts with other types of routes, the policy-based route always take precedence.
+    { # Policy-based routes route L4 network traffic based on not just destination IP address, but also source IP address, protocol, and more. If a policy-based route conflicts with other types of routes, the policy-based route always takes precedence.
       "createTime": "A String", # Output only. Time when the policy-based route was created.
       "description": "A String", # Optional. An optional description of this resource. Provide this field when you create the resource.
       "filter": { # Filter matches L4 traffic. # Required. The filter to match L4 traffic.
         "destRange": "A String", # Optional. The destination IP range of outgoing packets that this policy-based route applies to. Default is "0.0.0.0/0" if protocol version is IPv4.
         "ipProtocol": "A String", # Optional. The IP protocol that this policy-based route applies to. Valid values are 'TCP', 'UDP', and 'ALL'. Default is 'ALL'.
-        "protocolVersion": "A String", # Required. Internet protocol versions this policy-based route applies to. For this version, only IPV4 is supported.
+        "protocolVersion": "A String", # Required. Internet protocol versions this policy-based route applies to. For this version, only IPV4 is supported. IPV6 is supported in preview.
         "srcRange": "A String", # Optional. The source IP range of outgoing packets that this policy-based route applies to. Default is "0.0.0.0/0" if protocol version is IPv4.
       },
       "interconnectAttachment": { # InterconnectAttachment that this route applies to. # Optional. The interconnect attachments that this policy-based route applies to.
@@ -368,8 +368,8 @@
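The generated client wraps nextPageToken handling in a list_next helper; a hedged pagination sketch with a placeholder parent:

    from googleapiclient.discovery import build

    service = build("networkconnectivity", "v1")
    pbr = service.projects().locations().global_().policyBasedRoutes()

    request = pbr.list(parent="projects/my-project/locations/global")
    while request is not None:
        response = request.execute()
        for route in response.get("policyBasedRoutes", []):
            print(route["name"], route.get("priority", 1000))
        # list_next returns None once nextPageToken is absent from the response.
        request = pbr.list_next(previous_request=request, previous_response=response)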

Method Details

"priority": 42, # Optional. The priority of this policy-based route. Priority is used to break ties in cases where there are more than one matching policy-based routes found. In cases where multiple policy-based routes are matched, the one with the lowest-numbered priority value wins. The default value is 1000. The priority value must be from 1 to 65535, inclusive. "selfLink": "A String", # Output only. Server-defined fully-qualified URL for this resource. "updateTime": "A String", # Output only. Time when the policy-based route was updated. - "virtualMachine": { # VM instances to which this policy-based route applies to. # Optional. VM instances to which this policy-based route applies to. - "tags": [ # Optional. A list of VM instance tags the this policy-based route applies to. VM instances that have ANY of tags specified here will install this PBR. + "virtualMachine": { # VM instances that this policy-based route applies to. # Optional. VM instances that this policy-based route applies to. + "tags": [ # Optional. A list of VM instance tags that this policy-based route applies to. VM instances that have ANY of tags specified here installs this PBR. "A String", ], }, diff --git a/docs/dyn/networkconnectivity_v1.projects.locations.serviceConnectionMaps.html b/docs/dyn/networkconnectivity_v1.projects.locations.serviceConnectionMaps.html index dba0061b60f..ed966f4164b 100644 --- a/docs/dyn/networkconnectivity_v1.projects.locations.serviceConnectionMaps.html +++ b/docs/dyn/networkconnectivity_v1.projects.locations.serviceConnectionMaps.html @@ -126,6 +126,9 @@

Method Details

"network": "A String", # The resource path of the consumer network where PSC connections are allowed to be created in. Note, this network does not need be in the ConsumerPscConfig.project in the case of SharedVPC. Example: projects/{projectNumOrId}/global/networks/{networkId}. "producerInstanceId": "A String", # Immutable. An immutable identifier for the producer instance. "project": "A String", # The consumer project where PSC connections are allowed to be created in. + "serviceAttachmentIpAddressMap": { # Output only. A map to store mapping between customer vip and target service attachment. Only service attachment with producer specified ip addresses are stored here. + "a_key": "A String", + }, "state": "A String", # Output only. Overall state of PSC Connections management for this consumer psc config. }, ], @@ -268,6 +271,9 @@

Method Details

"network": "A String", # The resource path of the consumer network where PSC connections are allowed to be created in. Note, this network does not need be in the ConsumerPscConfig.project in the case of SharedVPC. Example: projects/{projectNumOrId}/global/networks/{networkId}. "producerInstanceId": "A String", # Immutable. An immutable identifier for the producer instance. "project": "A String", # The consumer project where PSC connections are allowed to be created in. + "serviceAttachmentIpAddressMap": { # Output only. A map to store mapping between customer vip and target service attachment. Only service attachment with producer specified ip addresses are stored here. + "a_key": "A String", + }, "state": "A String", # Output only. Overall state of PSC Connections management for this consumer psc config. }, ], @@ -398,6 +404,9 @@

Method Details

"network": "A String", # The resource path of the consumer network where PSC connections are allowed to be created in. Note, this network does not need be in the ConsumerPscConfig.project in the case of SharedVPC. Example: projects/{projectNumOrId}/global/networks/{networkId}. "producerInstanceId": "A String", # Immutable. An immutable identifier for the producer instance. "project": "A String", # The consumer project where PSC connections are allowed to be created in. + "serviceAttachmentIpAddressMap": { # Output only. A map to store mapping between customer vip and target service attachment. Only service attachment with producer specified ip addresses are stored here. + "a_key": "A String", + }, "state": "A String", # Output only. Overall state of PSC Connections management for this consumer psc config. }, ], @@ -487,6 +496,9 @@

Method Details

"network": "A String", # The resource path of the consumer network where PSC connections are allowed to be created in. Note, this network does not need be in the ConsumerPscConfig.project in the case of SharedVPC. Example: projects/{projectNumOrId}/global/networks/{networkId}. "producerInstanceId": "A String", # Immutable. An immutable identifier for the producer instance. "project": "A String", # The consumer project where PSC connections are allowed to be created in. + "serviceAttachmentIpAddressMap": { # Output only. A map to store mapping between customer vip and target service attachment. Only service attachment with producer specified ip addresses are stored here. + "a_key": "A String", + }, "state": "A String", # Output only. Overall state of PSC Connections management for this consumer psc config. }, ], diff --git a/docs/dyn/policyanalyzer_v1.folders.html b/docs/dyn/policyanalyzer_v1.folders.html new file mode 100644 index 00000000000..db2c1ecffba --- /dev/null +++ b/docs/dyn/policyanalyzer_v1.folders.html @@ -0,0 +1,91 @@ + + + +

Policy Analyzer API . folders

+

Instance Methods

+

+ locations() +

+

Returns the locations Resource.

+ +

+ close()

+

Close httplib2 connections.

+

Method Details

+
+ close() +
Close httplib2 connections.
+
+
+
\ No newline at end of file
diff --git a/docs/dyn/policyanalyzer_v1.folders.locations.activityTypes.activities.html b/docs/dyn/policyanalyzer_v1.folders.locations.activityTypes.activities.html
new file mode 100644
index 00000000000..08e5ff91b04
--- /dev/null
+++ b/docs/dyn/policyanalyzer_v1.folders.locations.activityTypes.activities.html
@@ -0,0 +1,141 @@
+
+
+

Policy Analyzer API . folders . locations . activityTypes . activities

+

Instance Methods

+

+ close()

+

Close httplib2 connections.

+

+ query(parent, filter=None, pageSize=None, pageToken=None, x__xgafv=None)

+

Queries policy activities on Google Cloud resources.

+

+ query_next()

+

Retrieves the next page of results.

+

Method Details

+
+ close() +
Close httplib2 connections.
+
+ +
+ query(parent, filter=None, pageSize=None, pageToken=None, x__xgafv=None) +
Queries policy activities on Google Cloud resources.
+
+Args:
+  parent: string, Required. The container resource on which to execute the request. Acceptable formats: `projects/[PROJECT_ID|PROJECT_NUMBER]/locations/[LOCATION]/activityTypes/[ACTIVITY_TYPE]` LOCATION here refers to Google Cloud Locations: https://cloud.google.com/about/locations/ (required)
+  filter: string, Optional. Filter expression to restrict the activities returned. For serviceAccountLastAuthentication activities, supported filters are: - `activities.full_resource_name {=} [STRING]` - `activities.fullResourceName {=} [STRING]` where `[STRING]` is the full resource name of the service account. For serviceAccountKeyLastAuthentication activities, supported filters are: - `activities.full_resource_name {=} [STRING]` - `activities.fullResourceName {=} [STRING]` where `[STRING]` is the full resource name of the service account key.
+  pageSize: integer, Optional. The maximum number of results to return from this request. Max limit is 1000. Non-positive values are ignored. The presence of `nextPageToken` in the response indicates that more results might be available.
+  pageToken: string, Optional. If present, then retrieve the next batch of results from the preceding call to this method. `pageToken` must be the value of `nextPageToken` from the previous response. The values of other method parameters should be identical to those in the previous call.
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Response to the `QueryActivity` method.
+  "activities": [ # The set of activities that match the filter included in the request.
+    { # Represents an Activity on a GCP resource over a specific observation period.
+      "activity": { # A struct of custom fields to explain the activity.
+        "a_key": "", # Properties of the object.
+      },
+      "activityType": "A String", # The type of the activity.
+      "fullResourceName": "A String", # The full resource name that identifies the resource. For examples of full resource names for Google Cloud services, see https://cloud.google.com/iam/help/troubleshooter/full-resource-names.
+      "observationPeriod": { # Represents data observation period. # The data observation period to build the activity.
+        "endTime": "A String", # The observation end time. The time in this timestamp is always `07:00:00Z`.
+        "startTime": "A String", # The observation start time. The time in this timestamp is always `07:00:00Z`.
+      },
+    },
+  ],
+  "nextPageToken": "A String", # If there might be more results than those appearing in this response, then `nextPageToken` is included. To get the next set of results, call this method again using the value of `nextPageToken` as `pageToken`.
+}
+
+ +
+ query_next() +
Retrieves the next page of results.
+
+        Args:
+          previous_request: The request for the previous page. (required)
+          previous_response: The response from the request for the previous page. (required)
+
+        Returns:
+          A request object that you can call 'execute()' on to request the next
+          page. Returns None if there are no more items in the collection.
+        
+
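A hedged sketch of driving the new folders-level query with its query_next companion; the folder number, and the folders-style parent format itself, are assumptions not confirmed by the docstring above:

    from googleapiclient.discovery import build

    service = build("policyanalyzer", "v1")
    activities = service.folders().locations().activityTypes().activities()

    request = activities.query(
        parent="folders/1234/locations/global/activityTypes/serviceAccountLastAuthentication",
        pageSize=1000,
    )
    while request is not None:
        response = request.execute()
        for activity in response.get("activities", []):
            print(activity["activityType"], activity["fullResourceName"])
        # query_next returns None once no nextPageToken is present.
        request = activities.query_next(
            previous_request=request, previous_response=response
        )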
+
+
\ No newline at end of file
diff --git a/docs/dyn/policyanalyzer_v1.folders.locations.activityTypes.html b/docs/dyn/policyanalyzer_v1.folders.locations.activityTypes.html
new file mode 100644
index 00000000000..12ff19857db
--- /dev/null
+++ b/docs/dyn/policyanalyzer_v1.folders.locations.activityTypes.html
@@ -0,0 +1,91 @@
+
+
+

Policy Analyzer API . folders . locations . activityTypes

+

Instance Methods

+

+ activities() +

+

Returns the activities Resource.

+ +

+ close()

+

Close httplib2 connections.

+

Method Details

+
+ close() +
Close httplib2 connections.
+
+
+
\ No newline at end of file
diff --git a/docs/dyn/policyanalyzer_v1.folders.locations.html b/docs/dyn/policyanalyzer_v1.folders.locations.html
new file mode 100644
index 00000000000..884d05a6cf2
--- /dev/null
+++ b/docs/dyn/policyanalyzer_v1.folders.locations.html
@@ -0,0 +1,91 @@
+
+
+

Policy Analyzer API . folders . locations

+

Instance Methods

+

+ activityTypes() +

+

Returns the activityTypes Resource.

+ +

+ close()

+

Close httplib2 connections.

+

Method Details

+
+ close() +
Close httplib2 connections.
+
+
+
\ No newline at end of file
diff --git a/docs/dyn/policyanalyzer_v1.html b/docs/dyn/policyanalyzer_v1.html
index cb82e3e1dcd..f8d37e71404 100644
--- a/docs/dyn/policyanalyzer_v1.html
+++ b/docs/dyn/policyanalyzer_v1.html
@@ -74,6 +74,16 @@

Policy Analyzer API

Instance Methods

+

+ folders() +

+

Returns the folders Resource.

+ +

+ organizations() +

+

Returns the organizations Resource.

+

projects()
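With folders() and organizations() added alongside projects(), the same activities tree is now reachable from three parents; a two-line sketch:

    from googleapiclient.discovery import build

    pa = build("policyanalyzer", "v1")
    # The same activityTypes().activities() chain hangs off each entry point:
    pa.folders(), pa.organizations(), pa.projects()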

diff --git a/docs/dyn/policyanalyzer_v1.organizations.html b/docs/dyn/policyanalyzer_v1.organizations.html
new file mode 100644
index 00000000000..7a67f9b3a4d
--- /dev/null
+++ b/docs/dyn/policyanalyzer_v1.organizations.html
@@ -0,0 +1,91 @@
+
+
+

Policy Analyzer API . organizations

+

Instance Methods

+

+ locations() +

+

Returns the locations Resource.

+ +

+ close()

+

Close httplib2 connections.

+

Method Details

+
+ close() +
Close httplib2 connections.
+
+
+
\ No newline at end of file
diff --git a/docs/dyn/policyanalyzer_v1.organizations.locations.activityTypes.activities.html b/docs/dyn/policyanalyzer_v1.organizations.locations.activityTypes.activities.html
new file mode 100644
index 00000000000..658d06641c4
--- /dev/null
+++ b/docs/dyn/policyanalyzer_v1.organizations.locations.activityTypes.activities.html
@@ -0,0 +1,141 @@
+
+
+

Policy Analyzer API . organizations . locations . activityTypes . activities

+

Instance Methods

+

+ close()

+

Close httplib2 connections.

+

+ query(parent, filter=None, pageSize=None, pageToken=None, x__xgafv=None)

+

Queries policy activities on Google Cloud resources.

+

+ query_next()

+

Retrieves the next page of results.

+

Method Details

+
+ close() +
Close httplib2 connections.
+
+ +
+ query(parent, filter=None, pageSize=None, pageToken=None, x__xgafv=None) +
Queries policy activities on Google Cloud resources.
+
+Args:
+  parent: string, Required. The container resource on which to execute the request. Acceptable formats: `projects/[PROJECT_ID|PROJECT_NUMBER]/locations/[LOCATION]/activityTypes/[ACTIVITY_TYPE]` LOCATION here refers to Google Cloud Locations: https://cloud.google.com/about/locations/ (required)
+  filter: string, Optional. Filter expression to restrict the activities returned. For serviceAccountLastAuthentication activities, supported filters are: - `activities.full_resource_name {=} [STRING]` - `activities.fullResourceName {=} [STRING]` where `[STRING]` is the full resource name of the service account. For serviceAccountKeyLastAuthentication activities, supported filters are: - `activities.full_resource_name {=} [STRING]` - `activities.fullResourceName {=} [STRING]` where `[STRING]` is the full resource name of the service account key.
+  pageSize: integer, Optional. The maximum number of results to return from this request. Max limit is 1000. Non-positive values are ignored. The presence of `nextPageToken` in the response indicates that more results might be available.
+  pageToken: string, Optional. If present, then retrieve the next batch of results from the preceding call to this method. `pageToken` must be the value of `nextPageToken` from the previous response. The values of other method parameters should be identical to those in the previous call.
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Response to the `QueryActivity` method.
+  "activities": [ # The set of activities that match the filter included in the request.
+    { # Represents an Activity on a GCP resource over a specific observation period.
+      "activity": { # A struct of custom fields to explain the activity.
+        "a_key": "", # Properties of the object.
+      },
+      "activityType": "A String", # The type of the activity.
+      "fullResourceName": "A String", # The full resource name that identifies the resource. For examples of full resource names for Google Cloud services, see https://cloud.google.com/iam/help/troubleshooter/full-resource-names.
+      "observationPeriod": { # Represents data observation period. # The data observation period to build the activity.
+        "endTime": "A String", # The observation end time. The time in this timestamp is always `07:00:00Z`.
+        "startTime": "A String", # The observation start time. The time in this timestamp is always `07:00:00Z`.
+      },
+    },
+  ],
+  "nextPageToken": "A String", # If there might be more results than those appearing in this response, then `nextPageToken` is included. To get the next set of results, call this method again using the value of `nextPageToken` as `pageToken`.
+}
+
+ +
+ query_next() +
Retrieves the next page of results.
+
+        Args:
+          previous_request: The request for the previous page. (required)
+          previous_response: The response from the request for the previous page. (required)
+
+        Returns:
+          A request object that you can call 'execute()' on to request the next
+          page. Returns None if there are no more items in the collection.
+        
+
+
+
\ No newline at end of file
diff --git a/docs/dyn/policyanalyzer_v1.organizations.locations.activityTypes.html b/docs/dyn/policyanalyzer_v1.organizations.locations.activityTypes.html
new file mode 100644
index 00000000000..7560d0a4fb8
--- /dev/null
+++ b/docs/dyn/policyanalyzer_v1.organizations.locations.activityTypes.html
@@ -0,0 +1,91 @@
+
+
+

Policy Analyzer API . organizations . locations . activityTypes

+

Instance Methods

+

+ activities() +

+

Returns the activities Resource.

+ +

+ close()

+

Close httplib2 connections.

+

Method Details

+
+ close() +
Close httplib2 connections.
+
+
+
\ No newline at end of file
diff --git a/docs/dyn/policyanalyzer_v1.organizations.locations.html b/docs/dyn/policyanalyzer_v1.organizations.locations.html
new file mode 100644
index 00000000000..cab7296d2f4
--- /dev/null
+++ b/docs/dyn/policyanalyzer_v1.organizations.locations.html
@@ -0,0 +1,91 @@
+
+
+

Policy Analyzer API . organizations . locations

+

Instance Methods

+

+ activityTypes() +

+

Returns the activityTypes Resource.

+ +

+ close()

+

Close httplib2 connections.

+

Method Details

+
+ close() +
Close httplib2 connections.
+
+
+
\ No newline at end of file
diff --git a/docs/dyn/policyanalyzer_v1.projects.locations.activityTypes.activities.html b/docs/dyn/policyanalyzer_v1.projects.locations.activityTypes.activities.html
index 3918e63224c..16ee91bc860 100644
--- a/docs/dyn/policyanalyzer_v1.projects.locations.activityTypes.activities.html
+++ b/docs/dyn/policyanalyzer_v1.projects.locations.activityTypes.activities.html
@@ -108,7 +108,7 @@

Method Details

 { # Response to the `QueryActivity` method.
   "activities": [ # The set of activities that match the filter included in the request.
-    {
+    { # Represents an Activity on a GCP resource over a specific observation period.
       "activity": { # A struct of custom fields to explain the activity.
         "a_key": "", # Properties of the object.
       },
diff --git a/docs/dyn/policyanalyzer_v1beta1.folders.html b/docs/dyn/policyanalyzer_v1beta1.folders.html
new file mode 100644
index 00000000000..2039d6597df
--- /dev/null
+++ b/docs/dyn/policyanalyzer_v1beta1.folders.html
@@ -0,0 +1,91 @@
+
+
+

Policy Analyzer API . folders

+

Instance Methods

+

+ locations() +

+

Returns the locations Resource.

+ +

+ close()

+

Close httplib2 connections.

+

Method Details

+
+ close() +
Close httplib2 connections.
+
+
+
\ No newline at end of file
diff --git a/docs/dyn/policyanalyzer_v1beta1.folders.locations.activityTypes.activities.html b/docs/dyn/policyanalyzer_v1beta1.folders.locations.activityTypes.activities.html
new file mode 100644
index 00000000000..77ff961b749
--- /dev/null
+++ b/docs/dyn/policyanalyzer_v1beta1.folders.locations.activityTypes.activities.html
@@ -0,0 +1,141 @@
+
+
+

Policy Analyzer API . folders . locations . activityTypes . activities

+

Instance Methods

+

+ close()

+

Close httplib2 connections.

+

+ query(parent, filter=None, pageSize=None, pageToken=None, x__xgafv=None)

+

Queries policy activities on GCP resources.

+

+ query_next()

+

Retrieves the next page of results.

+

Method Details

+
+ close() +
Close httplib2 connections.
+
+ +
+ query(parent, filter=None, pageSize=None, pageToken=None, x__xgafv=None) +
Queries policy activities on GCP resources.
+
+Args:
+  parent: string, Required. The container resource on which to execute the request. Acceptable formats: `projects/[PROJECT_ID|PROJECT_NUMBER]/locations/[LOCATION]/activityTypes/[ACTIVITY_TYPE]` LOCATION here refers to GCP Locations: https://cloud.google.com/about/locations/ (required)
+  filter: string, Optional. Filter expression to restrict the activities returned. Supported filters are: - service_account_last_authn.full_resource_name {=} - service_account_key_last_authn.full_resource_name {=}
+  pageSize: integer, Optional. The maximum number of results to return from this request. Max limit is 1000. Non-positive values are ignored. The presence of `nextPageToken` in the response indicates that more results might be available.
+  pageToken: string, Optional. If present, then retrieve the next batch of results from the preceding call to this method. `pageToken` must be the value of `nextPageToken` from the previous response. The values of other method parameters should be identical to those in the previous call.
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Response to the `QueryActivity` method.
+  "activities": [ # The set of activities that match the filter included in the request.
+    { # Represents an Activity on a GCP resource over a specific observation period.
+      "activity": { # A struct of custom fields to explain the activity.
+        "a_key": "", # Properties of the object.
+      },
+      "activityType": "A String", # The type of the activity.
+      "fullResourceName": "A String", # The full resource name that identifies the resource. For examples of full resource names for Google Cloud services, see https://cloud.google.com/iam/help/troubleshooter/full-resource-names.
+      "observationPeriod": { # Represents data observation period. # The data observation period to build the activity.
+        "endTime": "A String", # The observation end time.
+        "startTime": "A String", # The observation start time.
+      },
+    },
+  ],
+  "nextPageToken": "A String", # If there might be more results than those appearing in this response, then `nextPageToken` is included. To get the next set of results, call this method again using the value of `nextPageToken` as `pageToken`.
+}
+
+ +
+ query_next() +
Retrieves the next page of results.
+
+        Args:
+          previous_request: The request for the previous page. (required)
+          previous_response: The response from the request for the previous page. (required)
+
+        Returns:
+          A request object that you can call 'execute()' on to request the next
+          page. Returns None if there are no more items in the collection.
+        
+
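Note that the v1beta1 filter keys differ from v1 (verdict-level snake_case fields rather than `activities.*`); a sketch with a hypothetical service account and a folders-style parent assumed:

    from googleapiclient.discovery import build

    service = build("policyanalyzer", "v1beta1")
    response = service.folders().locations().activityTypes().activities().query(
        parent="folders/1234/locations/global/activityTypes/serviceAccountLastAuthentication",
        filter='service_account_last_authn.full_resource_name = '
               '"//iam.googleapis.com/projects/my-project/serviceAccounts/'
               'my-sa@my-project.iam.gserviceaccount.com"',
    ).execute()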
+
+
\ No newline at end of file
diff --git a/docs/dyn/policyanalyzer_v1beta1.folders.locations.activityTypes.html b/docs/dyn/policyanalyzer_v1beta1.folders.locations.activityTypes.html
new file mode 100644
index 00000000000..7ef8a5a63b5
--- /dev/null
+++ b/docs/dyn/policyanalyzer_v1beta1.folders.locations.activityTypes.html
@@ -0,0 +1,91 @@
+
+
+

Policy Analyzer API . folders . locations . activityTypes

+

Instance Methods

+

+ activities() +

+

Returns the activities Resource.

+ +

+ close()

+

Close httplib2 connections.

+

Method Details

+
+ close() +
Close httplib2 connections.
+
+
+
\ No newline at end of file
diff --git a/docs/dyn/policyanalyzer_v1beta1.folders.locations.html b/docs/dyn/policyanalyzer_v1beta1.folders.locations.html
new file mode 100644
index 00000000000..18443ff6ae2
--- /dev/null
+++ b/docs/dyn/policyanalyzer_v1beta1.folders.locations.html
@@ -0,0 +1,91 @@
+
+
+

Policy Analyzer API . folders . locations

+

Instance Methods

+

+ activityTypes() +

+

Returns the activityTypes Resource.

+ +

+ close()

+

Close httplib2 connections.

+

Method Details

+
+ close() +
Close httplib2 connections.
+
+
+
\ No newline at end of file
diff --git a/docs/dyn/policyanalyzer_v1beta1.html b/docs/dyn/policyanalyzer_v1beta1.html
index 5890c699fb3..dd92da37e80 100644
--- a/docs/dyn/policyanalyzer_v1beta1.html
+++ b/docs/dyn/policyanalyzer_v1beta1.html
@@ -74,6 +74,16 @@

Policy Analyzer API

Instance Methods

+

+ folders() +

+

Returns the folders Resource.

+ +

+ organizations() +

+

Returns the organizations Resource.

+

projects()

diff --git a/docs/dyn/policyanalyzer_v1beta1.organizations.html b/docs/dyn/policyanalyzer_v1beta1.organizations.html
new file mode 100644
index 00000000000..bc821dfe914
--- /dev/null
+++ b/docs/dyn/policyanalyzer_v1beta1.organizations.html
@@ -0,0 +1,91 @@
+
+
+

Policy Analyzer API . organizations

+

Instance Methods

+

+ locations() +

+

Returns the locations Resource.

+ +

+ close()

+

Close httplib2 connections.

+

Method Details

+
+ close() +
Close httplib2 connections.
+
+
+
\ No newline at end of file
diff --git a/docs/dyn/policyanalyzer_v1beta1.organizations.locations.activityTypes.activities.html b/docs/dyn/policyanalyzer_v1beta1.organizations.locations.activityTypes.activities.html
new file mode 100644
index 00000000000..9fe6bcb730a
--- /dev/null
+++ b/docs/dyn/policyanalyzer_v1beta1.organizations.locations.activityTypes.activities.html
@@ -0,0 +1,141 @@
+
+
+

Policy Analyzer API . organizations . locations . activityTypes . activities

+

Instance Methods

+

+ close()

+

Close httplib2 connections.

+

+ query(parent, filter=None, pageSize=None, pageToken=None, x__xgafv=None)

+

Queries policy activities on GCP resources.

+

+ query_next()

+

Retrieves the next page of results.

+

Method Details

+
+ close() +
Close httplib2 connections.
+
+ +
+ query(parent, filter=None, pageSize=None, pageToken=None, x__xgafv=None) +
Queries policy activities on GCP resources.
+
+Args:
+  parent: string, Required. The container resource on which to execute the request. Acceptable formats: `projects/[PROJECT_ID|PROJECT_NUMBER]/locations/[LOCATION]/activityTypes/[ACTIVITY_TYPE]` LOCATION here refers to GCP Locations: https://cloud.google.com/about/locations/ (required)
+  filter: string, Optional. Filter expression to restrict the activities returned. Supported filters are: - service_account_last_authn.full_resource_name {=} - service_account_key_last_authn.full_resource_name {=}
+  pageSize: integer, Optional. The maximum number of results to return from this request. Max limit is 1000. Non-positive values are ignored. The presence of `nextPageToken` in the response indicates that more results might be available.
+  pageToken: string, Optional. If present, then retrieve the next batch of results from the preceding call to this method. `pageToken` must be the value of `nextPageToken` from the previous response. The values of other method parameters should be identical to those in the previous call.
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Response to the `QueryActivity` method.
+  "activities": [ # The set of activities that match the filter included in the request.
+    { # Represents an Activity on a GCP resource over a specific observation period.
+      "activity": { # A struct of custom fields to explain the activity.
+        "a_key": "", # Properties of the object.
+      },
+      "activityType": "A String", # The type of the activity.
+      "fullResourceName": "A String", # The full resource name that identifies the resource. For examples of full resource names for Google Cloud services, see https://cloud.google.com/iam/help/troubleshooter/full-resource-names.
+      "observationPeriod": { # Represents data observation period. # The data observation period to build the activity.
+        "endTime": "A String", # The observation end time.
+        "startTime": "A String", # The observation start time.
+      },
+    },
+  ],
+  "nextPageToken": "A String", # If there might be more results than those appearing in this response, then `nextPageToken` is included. To get the next set of results, call this method again using the value of `nextPageToken` as `pageToken`.
+}
+
+ +
+ query_next() +
Retrieves the next page of results.
+
+        Args:
+          previous_request: The request for the previous page. (required)
+          previous_response: The response from the request for the previous page. (required)
+
+        Returns:
+          A request object that you can call 'execute()' on to request the next
+          page. Returns None if there are no more items in the collection.
+        
+
+
+
\ No newline at end of file
diff --git a/docs/dyn/policyanalyzer_v1beta1.organizations.locations.activityTypes.html b/docs/dyn/policyanalyzer_v1beta1.organizations.locations.activityTypes.html
new file mode 100644
index 00000000000..c6629cefd1c
--- /dev/null
+++ b/docs/dyn/policyanalyzer_v1beta1.organizations.locations.activityTypes.html
@@ -0,0 +1,91 @@
+
+
+

Policy Analyzer API . organizations . locations . activityTypes

+

Instance Methods

+

+ activities() +

+

Returns the activities Resource.

+ +

+ close()

+

Close httplib2 connections.

+

Method Details

+
+ close() +
Close httplib2 connections.
+
+
+
\ No newline at end of file
diff --git a/docs/dyn/policyanalyzer_v1beta1.organizations.locations.html b/docs/dyn/policyanalyzer_v1beta1.organizations.locations.html
new file mode 100644
index 00000000000..257dbf4c8c3
--- /dev/null
+++ b/docs/dyn/policyanalyzer_v1beta1.organizations.locations.html
@@ -0,0 +1,91 @@
+
+
+

Policy Analyzer API . organizations . locations

+

Instance Methods

+

+ activityTypes() +

+

Returns the activityTypes Resource.

+ +

+ close()

+

Close httplib2 connections.

+

Method Details

+
+ close() +
Close httplib2 connections.
+
+
+
\ No newline at end of file
diff --git a/docs/dyn/policyanalyzer_v1beta1.projects.locations.activityTypes.activities.html b/docs/dyn/policyanalyzer_v1beta1.projects.locations.activityTypes.activities.html
index 1ec0e316165..c66d7a65204 100644
--- a/docs/dyn/policyanalyzer_v1beta1.projects.locations.activityTypes.activities.html
+++ b/docs/dyn/policyanalyzer_v1beta1.projects.locations.activityTypes.activities.html
@@ -108,7 +108,7 @@

Method Details

 { # Response to the `QueryActivity` method.
   "activities": [ # The set of activities that match the filter included in the request.
-    {
+    { # Represents an Activity on a GCP resource over a specific observation period.
       "activity": { # A struct of custom fields to explain the activity.
         "a_key": "", # Properties of the object.
       },
diff --git a/docs/dyn/recaptchaenterprise_v1.projects.assessments.html b/docs/dyn/recaptchaenterprise_v1.projects.assessments.html
index 93e5c784e4e..e0098b8a96c 100644
--- a/docs/dyn/recaptchaenterprise_v1.projects.assessments.html
+++ b/docs/dyn/recaptchaenterprise_v1.projects.assessments.html
@@ -303,11 +303,11 @@

Method Details

}, "name": "A String", # Output only. Identifier. The resource name for the Assessment in the format `projects/{project}/assessments/{assessment}`. "phoneFraudAssessment": { # Assessment for Phone Fraud # Output only. Assessment returned when a site key, a token, and a phone number as `user_id` are provided. Account defender and SMS toll fraud protection need to be enabled. - "smsTollFraudVerdict": { # Information about sms toll fraud # Output only. Assessment of this phone event for risk of sms toll fraud. + "smsTollFraudVerdict": { # Information about SMS toll fraud. # Output only. Assessment of this phone event for risk of SMS toll fraud. "reasons": [ # Output only. Reasons contributing to the SMS toll fraud verdict. "A String", ], - "risk": 3.14, # Output only. Probability of an sms event being fraudulent. Values are from 0.0 (lowest) to 1.0 (highest). + "risk": 3.14, # Output only. Probability of an SMS event being fraudulent. Values are from 0.0 (lowest) to 1.0 (highest). }, }, "privatePasswordLeakVerification": { # Private password leak verification info. # Optional. The private password leak verification field contains the parameters that are used to to check for leaks privately without sharing user credentials. @@ -515,11 +515,11 @@

Method Details

}, "name": "A String", # Output only. Identifier. The resource name for the Assessment in the format `projects/{project}/assessments/{assessment}`. "phoneFraudAssessment": { # Assessment for Phone Fraud # Output only. Assessment returned when a site key, a token, and a phone number as `user_id` are provided. Account defender and SMS toll fraud protection need to be enabled. - "smsTollFraudVerdict": { # Information about sms toll fraud # Output only. Assessment of this phone event for risk of sms toll fraud. + "smsTollFraudVerdict": { # Information about SMS toll fraud. # Output only. Assessment of this phone event for risk of SMS toll fraud. "reasons": [ # Output only. Reasons contributing to the SMS toll fraud verdict. "A String", ], - "risk": 3.14, # Output only. Probability of an sms event being fraudulent. Values are from 0.0 (lowest) to 1.0 (highest). + "risk": 3.14, # Output only. Probability of an SMS event being fraudulent. Values are from 0.0 (lowest) to 1.0 (highest). }, }, "privatePasswordLeakVerification": { # Private password leak verification info. # Optional. The private password leak verification field contains the parameters that are used to to check for leaks privately without sharing user credentials. diff --git a/docs/dyn/run_v1.namespaces.configurations.html b/docs/dyn/run_v1.namespaces.configurations.html index 3b68b8efa22..c5f63e9536e 100644 --- a/docs/dyn/run_v1.namespaces.configurations.html +++ b/docs/dyn/run_v1.namespaces.configurations.html @@ -82,7 +82,7 @@

Instance Methods

Get information about a configuration.

list(parent, continue=None, fieldSelector=None, includeUninitialized=None, labelSelector=None, limit=None, resourceVersion=None, watch=None, x__xgafv=None)

-List configurations.
+List configurations. Results are sorted by creation time, descending.

Method Details

close()
@@ -435,7 +435,7 @@

Method Details

list(parent, continue=None, fieldSelector=None, includeUninitialized=None, labelSelector=None, limit=None, resourceVersion=None, watch=None, x__xgafv=None)
-List configurations.
+List configurations. Results are sorted by creation time, descending.
 
 Args:
   parent: string, The namespace from which the configurations should be listed. For Cloud Run, replace {namespace_id} with the project ID or number. (required)
diff --git a/docs/dyn/run_v1.namespaces.executions.html b/docs/dyn/run_v1.namespaces.executions.html
index dc8983d642e..b75b4548f0d 100644
--- a/docs/dyn/run_v1.namespaces.executions.html
+++ b/docs/dyn/run_v1.namespaces.executions.html
@@ -88,7 +88,7 @@ 

Instance Methods

Get information about an execution.

list(parent, continue=None, fieldSelector=None, includeUninitialized=None, labelSelector=None, limit=None, resourceVersion=None, watch=None, x__xgafv=None)

-List executions.
+List executions. Results are sorted by creation time, descending.

Method Details

cancel(name, body=None, x__xgafv=None)
@@ -767,7 +767,7 @@

Method Details

list(parent, continue=None, fieldSelector=None, includeUninitialized=None, labelSelector=None, limit=None, resourceVersion=None, watch=None, x__xgafv=None)
-List executions.
+List executions. Results are sorted by creation time, descending.
 
 Args:
   parent: string, Required. The namespace from which the executions should be listed. Replace {namespace} with the project ID or number. It takes the form namespaces/{namespace}. For example: namespaces/PROJECT_ID (required)
diff --git a/docs/dyn/run_v1.namespaces.jobs.html b/docs/dyn/run_v1.namespaces.jobs.html
index 12fce3293f4..838626849b6 100644
--- a/docs/dyn/run_v1.namespaces.jobs.html
+++ b/docs/dyn/run_v1.namespaces.jobs.html
@@ -88,7 +88,7 @@ 

Instance Methods

Get information about a job.

list(parent, continue=None, fieldSelector=None, includeUninitialized=None, labelSelector=None, limit=None, resourceVersion=None, watch=None, x__xgafv=None)

-List jobs.
+List jobs. Results are sorted by creation time, descending.

replaceJob(name, body=None, x__xgafv=None)

Replace a job. Only the spec and metadata labels and annotations are modifiable. After the Replace request, Cloud Run will work to make the 'status' match the requested 'spec'. May provide metadata.resourceVersion to enforce update from last read for optimistic concurrency control.

@@ -1173,7 +1173,7 @@

Method Details

list(parent, continue=None, fieldSelector=None, includeUninitialized=None, labelSelector=None, limit=None, resourceVersion=None, watch=None, x__xgafv=None)
-List jobs.
+List jobs. Results are sorted by creation time, descending.
 
 Args:
   parent: string, Required. The namespace from which the jobs should be listed. Replace {namespace} with the project ID or number. It takes the form namespaces/{namespace}. For example: namespaces/PROJECT_ID (required)
diff --git a/docs/dyn/run_v1.namespaces.revisions.html b/docs/dyn/run_v1.namespaces.revisions.html
index c60d6a8557f..2f264052d4e 100644
--- a/docs/dyn/run_v1.namespaces.revisions.html
+++ b/docs/dyn/run_v1.namespaces.revisions.html
@@ -85,7 +85,7 @@ 

Instance Methods

Get information about a revision.

list(parent, continue=None, fieldSelector=None, includeUninitialized=None, labelSelector=None, limit=None, resourceVersion=None, watch=None, x__xgafv=None)

-List revisions.
+List revisions. Results are sorted by creation time, descending.

Method Details

close()
@@ -449,7 +449,7 @@

Method Details

list(parent, continue=None, fieldSelector=None, includeUninitialized=None, labelSelector=None, limit=None, resourceVersion=None, watch=None, x__xgafv=None)
-List revisions.
+List revisions. Results are sorted by creation time, descending.
 
 Args:
   parent: string, The namespace from which the revisions should be listed. For Cloud Run (fully managed), replace {namespace} with the project ID or number. It takes the form namespaces/{namespace}. For example: namespaces/PROJECT_ID (required)
diff --git a/docs/dyn/run_v1.namespaces.routes.html b/docs/dyn/run_v1.namespaces.routes.html
index fb6ade8b5ef..6bff9443da8 100644
--- a/docs/dyn/run_v1.namespaces.routes.html
+++ b/docs/dyn/run_v1.namespaces.routes.html
@@ -82,7 +82,7 @@ 

Instance Methods

Get information about a route.

list(parent, continue=None, fieldSelector=None, includeUninitialized=None, labelSelector=None, limit=None, resourceVersion=None, watch=None, x__xgafv=None)

-List routes.
+List routes. Results are sorted by creation time, descending.

Method Details

close()
@@ -182,7 +182,7 @@

Method Details

list(parent, continue=None, fieldSelector=None, includeUninitialized=None, labelSelector=None, limit=None, resourceVersion=None, watch=None, x__xgafv=None)
-List routes.
+List routes. Results are sorted by creation time, descending.
 
 Args:
   parent: string, The namespace from which the routes should be listed. For Cloud Run (fully managed), replace {namespace} with the project ID or number. It takes the form namespaces/{namespace}. For example: namespaces/PROJECT_ID (required)
diff --git a/docs/dyn/run_v1.namespaces.services.html b/docs/dyn/run_v1.namespaces.services.html
index a48e2fe3de9..b7de189b0bc 100644
--- a/docs/dyn/run_v1.namespaces.services.html
+++ b/docs/dyn/run_v1.namespaces.services.html
@@ -88,7 +88,7 @@ 

Instance Methods

Gets information about a service.

list(parent, continue=None, fieldSelector=None, includeUninitialized=None, labelSelector=None, limit=None, resourceVersion=None, watch=None, x__xgafv=None)

-Lists services for the given project and region.
+Lists services for the given project and region. Results are sorted by creation time, descending.

replaceService(name, body=None, dryRun=None, x__xgafv=None)

Replaces a service. Only the spec and metadata labels and annotations are modifiable. After the Update request, Cloud Run will work to make the 'status' match the requested 'spec'. May provide metadata.resourceVersion to enforce update from last read for optimistic concurrency control.

@@ -1238,7 +1238,7 @@

Method Details

list(parent, continue=None, fieldSelector=None, includeUninitialized=None, labelSelector=None, limit=None, resourceVersion=None, watch=None, x__xgafv=None)
-Lists services for the given project and region.
+Lists services for the given project and region. Results are sorted by creation time, descending.
 
 Args:
   parent: string, Required. The parent from where the resources should be listed. In Cloud Run, it may be one of the following: * `{project_id_or_number}` * `namespaces/{project_id_or_number}` * `namespaces/{project_id_or_number}/services` * `projects/{project_id_or_number}/locations/{region}` * `projects/{project_id_or_number}/regions/{region}` (required)
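The ordering note is the only behavioral change in these list methods; a sketch of observing it from the client, with a hypothetical project ID and the items/metadata shape assumed from the Kubernetes-style response:

    from googleapiclient.discovery import build

    service = build("run", "v1")
    response = (
        service.namespaces()
        .services()
        .list(parent="namespaces/my-project")
        .execute()
    )
    # Per this change, items arrive newest first (sorted by creation time, descending).
    for svc in response.get("items", []):
        print(svc["metadata"]["name"], svc["metadata"].get("creationTimestamp"))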
diff --git a/docs/dyn/run_v1.projects.locations.configurations.html b/docs/dyn/run_v1.projects.locations.configurations.html
index f2afbe94585..797bfb8b8f1 100644
--- a/docs/dyn/run_v1.projects.locations.configurations.html
+++ b/docs/dyn/run_v1.projects.locations.configurations.html
@@ -82,7 +82,7 @@ 

Instance Methods

Get information about a configuration.

list(parent, continue=None, fieldSelector=None, includeUninitialized=None, labelSelector=None, limit=None, resourceVersion=None, watch=None, x__xgafv=None)

-List configurations.
+List configurations. Results are sorted by creation time, descending.

Method Details

close()
@@ -435,7 +435,7 @@

Method Details

list(parent, continue=None, fieldSelector=None, includeUninitialized=None, labelSelector=None, limit=None, resourceVersion=None, watch=None, x__xgafv=None)
-List configurations.
+List configurations. Results are sorted by creation time, descending.
 
 Args:
   parent: string, The namespace from which the configurations should be listed. For Cloud Run, replace {namespace_id} with the project ID or number. (required)
diff --git a/docs/dyn/run_v1.projects.locations.revisions.html b/docs/dyn/run_v1.projects.locations.revisions.html
index 054d7b09e1c..4b3c22014b3 100644
--- a/docs/dyn/run_v1.projects.locations.revisions.html
+++ b/docs/dyn/run_v1.projects.locations.revisions.html
@@ -85,7 +85,7 @@ 

Instance Methods

Get information about a revision.

list(parent, continue=None, fieldSelector=None, includeUninitialized=None, labelSelector=None, limit=None, resourceVersion=None, watch=None, x__xgafv=None)

-List revisions.
+List revisions. Results are sorted by creation time, descending.

Method Details

close()
@@ -449,7 +449,7 @@

Method Details

list(parent, continue=None, fieldSelector=None, includeUninitialized=None, labelSelector=None, limit=None, resourceVersion=None, watch=None, x__xgafv=None)
-List revisions.
+List revisions. Results are sorted by creation time, descending.
 
 Args:
   parent: string, The namespace from which the revisions should be listed. For Cloud Run (fully managed), replace {namespace} with the project ID or number. It takes the form namespaces/{namespace}. For example: namespaces/PROJECT_ID (required)
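A hedged sketch of paging with `limit` and the `continue` token. The token is assumed to be echoed back in the response's list metadata, and `continue` has to be passed via dict unpacking because it is a Python keyword:

from googleapiclient.discovery import build

run = build("run", "v1",
            client_options={"api_endpoint": "https://us-central1-run.googleapis.com"})  # assumed

parent = "projects/PROJECT_ID/locations/us-central1"  # illustrative
token = None
while True:
    kwargs = {"continue": token} if token else {}
    resp = run.projects().locations().revisions().list(
        parent=parent, limit=100, **kwargs).execute()
    for rev in resp.get("items", []):
        print(rev["metadata"]["name"])
    token = resp.get("metadata", {}).get("continue")  # assumed ListMeta field
    if not token:
        break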
diff --git a/docs/dyn/run_v1.projects.locations.routes.html b/docs/dyn/run_v1.projects.locations.routes.html
index d2a4ee4a281..888a1b0b0a7 100644
--- a/docs/dyn/run_v1.projects.locations.routes.html
+++ b/docs/dyn/run_v1.projects.locations.routes.html
@@ -82,7 +82,7 @@ 

Instance Methods

Get information about a route.

list(parent, continue=None, fieldSelector=None, includeUninitialized=None, labelSelector=None, limit=None, resourceVersion=None, watch=None, x__xgafv=None)

-List routes.
+List routes. Results are sorted by creation time, descending.

Method Details

close()
@@ -182,7 +182,7 @@

Method Details

list(parent, continue=None, fieldSelector=None, includeUninitialized=None, labelSelector=None, limit=None, resourceVersion=None, watch=None, x__xgafv=None)
-List routes.
+List routes. Results are sorted by creation time, descending.
 
 Args:
   parent: string, The namespace from which the routes should be listed. For Cloud Run (fully managed), replace {namespace} with the project ID or number. It takes the form namespaces/{namespace}. For example: namespaces/PROJECT_ID (required)
diff --git a/docs/dyn/run_v1.projects.locations.services.html b/docs/dyn/run_v1.projects.locations.services.html
index 79652accf05..707abb9d638 100644
--- a/docs/dyn/run_v1.projects.locations.services.html
+++ b/docs/dyn/run_v1.projects.locations.services.html
@@ -91,7 +91,7 @@ 

Instance Methods

Gets the IAM Access Control policy currently in effect for the given Cloud Run service. This result does not include any inherited policies.

list(parent, continue=None, fieldSelector=None, includeUninitialized=None, labelSelector=None, limit=None, resourceVersion=None, watch=None, x__xgafv=None)

-Lists services for the given project and region.
+Lists services for the given project and region. Results are sorted by creation time, descending.

replaceService(name, body=None, dryRun=None, x__xgafv=None)

Replaces a service. Only the spec and metadata labels and annotations are modifiable. After the Update request, Cloud Run will work to make the 'status' match the requested 'spec'. May provide metadata.resourceVersion to enforce update from last read for optimistic concurrency control.

@@ -1295,7 +1295,7 @@

Method Details

list(parent, continue=None, fieldSelector=None, includeUninitialized=None, labelSelector=None, limit=None, resourceVersion=None, watch=None, x__xgafv=None)
-Lists services for the given project and region.
+Lists services for the given project and region. Results are sorted by creation time, descending.
 
 Args:
   parent: string, Required. The parent from where the resources should be listed. In Cloud Run, it may be one of the following: * `{project_id_or_number}` * `namespaces/{project_id_or_number}` * `namespaces/{project_id_or_number}/services` * `projects/{project_id_or_number}/locations/{region}` * `projects/{project_id_or_number}/regions/{region}` (required)
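The same call through the projects.locations surface, using one of the parent formats listed above (values illustrative):

from googleapiclient.discovery import build

run = build("run", "v1",
            client_options={"api_endpoint": "https://us-central1-run.googleapis.com"})  # assumed

resp = run.projects().locations().services().list(
    parent="projects/PROJECT_ID/locations/us-central1").execute()
print(len(resp.get("items", [])), "services, newest first")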
diff --git a/docs/dyn/run_v2.projects.locations.jobs.executions.html b/docs/dyn/run_v2.projects.locations.jobs.executions.html
index 9df4204930c..05f4fda2878 100644
--- a/docs/dyn/run_v2.projects.locations.jobs.executions.html
+++ b/docs/dyn/run_v2.projects.locations.jobs.executions.html
@@ -96,7 +96,7 @@ 

Instance Methods

Gets information about an Execution.

list(parent, pageSize=None, pageToken=None, showDeleted=None, x__xgafv=None)

-Lists Executions from a Job.
+Lists Executions from a Job. Results are sorted by creation time, descending.

list_next()

Retrieves the next page of results.

@@ -430,7 +430,7 @@

Method Details

list(parent, pageSize=None, pageToken=None, showDeleted=None, x__xgafv=None)
-Lists Executions from a Job.
+Lists Executions from a Job. Results are sorted by creation time, descending.
 
 Args:
   parent: string, Required. The Execution from which the Executions should be listed. To list all Executions across Jobs, use "-" instead of Job name. Format: `projects/{project}/locations/{location}/jobs/{job}`, where `{project}` can be project id or number. (required)
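A sketch combining the "-" wildcard from the Args above with the documented list_next() pager; project and location are placeholders:

from googleapiclient.discovery import build

run = build("run", "v2")

executions = run.projects().locations().jobs().executions()
request = executions.list(parent="projects/PROJECT_ID/locations/us-central1/jobs/-")
while request is not None:
    resp = request.execute()
    for execution in resp.get("executions", []):
        print(execution["name"])
    request = executions.list_next(request, resp)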
diff --git a/docs/dyn/run_v2.projects.locations.jobs.html b/docs/dyn/run_v2.projects.locations.jobs.html
index 9014cff87c0..f7cf7f8dfc3 100644
--- a/docs/dyn/run_v2.projects.locations.jobs.html
+++ b/docs/dyn/run_v2.projects.locations.jobs.html
@@ -96,7 +96,7 @@ 

Instance Methods

Gets the IAM Access Control policy currently in effect for the given Job. This result does not include any inherited policies.

list(parent, pageSize=None, pageToken=None, showDeleted=None, x__xgafv=None)

-Lists Jobs.
+Lists Jobs. Results are sorted by creation time, descending.

list_next()

Retrieves the next page of results.

@@ -170,6 +170,7 @@

Method Details

"name": "A String", # The fully qualified name of this Job. Format: projects/{project}/locations/{location}/jobs/{job} "observedGeneration": "A String", # Output only. The generation of this Job. See comments in `reconciling` for additional information on reconciliation process in Cloud Run. "reconciling": True or False, # Output only. Returns true if the Job is currently being acted upon by the system to bring it into the desired state. When a new Job is created, or an existing one is updated, Cloud Run will asynchronously perform all necessary steps to bring the Job to the desired state. This process is called reconciliation. While reconciliation is in process, `observed_generation` and `latest_succeeded_execution`, will have transient values that might mismatch the intended state: Once reconciliation is over (and this field is false), there are two possible outcomes: reconciliation succeeded and the state matches the Job, or there was an error, and reconciliation failed. This state can be found in `terminal_condition.state`. If reconciliation succeeded, the following fields will match: `observed_generation` and `generation`, `latest_succeeded_execution` and `latest_created_execution`. If reconciliation failed, `observed_generation` and `latest_succeeded_execution` will have the state of the last succeeded execution or empty for newly created Job. Additional information on the failure can be found in `terminal_condition` and `conditions`. + "runExecutionToken": "A String", # A unique string used as a suffix for creating a new execution. The Job will become ready when the execution is successfully completed. The sum of job name and token length must be fewer than 63 characters. "satisfiesPzs": True or False, # Output only. Reserved for future use. "startExecutionToken": "A String", # A unique string used as a suffix creating a new execution. The Job will become ready when the execution is successfully started. The sum of job name and token length must be fewer than 63 characters. "template": { # ExecutionTemplate describes the data an execution should have when created from a template. # Required. The template used to create executions for this Job. @@ -468,6 +469,7 @@

Method Details

"name": "A String", # The fully qualified name of this Job. Format: projects/{project}/locations/{location}/jobs/{job} "observedGeneration": "A String", # Output only. The generation of this Job. See comments in `reconciling` for additional information on reconciliation process in Cloud Run. "reconciling": True or False, # Output only. Returns true if the Job is currently being acted upon by the system to bring it into the desired state. When a new Job is created, or an existing one is updated, Cloud Run will asynchronously perform all necessary steps to bring the Job to the desired state. This process is called reconciliation. While reconciliation is in process, `observed_generation` and `latest_succeeded_execution`, will have transient values that might mismatch the intended state: Once reconciliation is over (and this field is false), there are two possible outcomes: reconciliation succeeded and the state matches the Job, or there was an error, and reconciliation failed. This state can be found in `terminal_condition.state`. If reconciliation succeeded, the following fields will match: `observed_generation` and `generation`, `latest_succeeded_execution` and `latest_created_execution`. If reconciliation failed, `observed_generation` and `latest_succeeded_execution` will have the state of the last succeeded execution or empty for newly created Job. Additional information on the failure can be found in `terminal_condition` and `conditions`. + "runExecutionToken": "A String", # A unique string used as a suffix for creating a new execution. The Job will become ready when the execution is successfully completed. The sum of job name and token length must be fewer than 63 characters. "satisfiesPzs": True or False, # Output only. Reserved for future use. "startExecutionToken": "A String", # A unique string used as a suffix creating a new execution. The Job will become ready when the execution is successfully started. The sum of job name and token length must be fewer than 63 characters. "template": { # ExecutionTemplate describes the data an execution should have when created from a template. # Required. The template used to create executions for this Job. @@ -692,7 +694,7 @@

Method Details

list(parent, pageSize=None, pageToken=None, showDeleted=None, x__xgafv=None)
-Lists Jobs.
+Lists Jobs. Results are sorted by creation time, descending.
 
 Args:
   parent: string, Required. The location and project to list resources on. Format: projects/{project}/locations/{location}, where {project} can be project id or number. (required)
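For the new runExecutionToken field added in the hunks above, a hypothetical sketch of supplying it when creating a Job. The container image, IDs, and the choice to set the token at create time are illustrative assumptions; the quoted semantics come from the diff:

from googleapiclient.discovery import build

run = build("run", "v2")

job_body = {
    "template": {"template": {"containers": [
        {"image": "us-docker.pkg.dev/PROJECT_ID/repo/job:latest"}  # illustrative
    ]}},
    # Per the diff: "A unique string used as a suffix for creating a new
    # execution. The Job will become ready when the execution is successfully
    # completed." Job name plus token must stay under 63 characters.
    "runExecutionToken": "run-001",
}
op = run.projects().locations().jobs().create(
    parent="projects/PROJECT_ID/locations/us-central1",
    jobId="my-job", body=job_body).execute()
print(op["name"])  # long-running operation to poll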
@@ -752,6 +754,7 @@ 

Method Details

"name": "A String", # The fully qualified name of this Job. Format: projects/{project}/locations/{location}/jobs/{job} "observedGeneration": "A String", # Output only. The generation of this Job. See comments in `reconciling` for additional information on reconciliation process in Cloud Run. "reconciling": True or False, # Output only. Returns true if the Job is currently being acted upon by the system to bring it into the desired state. When a new Job is created, or an existing one is updated, Cloud Run will asynchronously perform all necessary steps to bring the Job to the desired state. This process is called reconciliation. While reconciliation is in process, `observed_generation` and `latest_succeeded_execution`, will have transient values that might mismatch the intended state: Once reconciliation is over (and this field is false), there are two possible outcomes: reconciliation succeeded and the state matches the Job, or there was an error, and reconciliation failed. This state can be found in `terminal_condition.state`. If reconciliation succeeded, the following fields will match: `observed_generation` and `generation`, `latest_succeeded_execution` and `latest_created_execution`. If reconciliation failed, `observed_generation` and `latest_succeeded_execution` will have the state of the last succeeded execution or empty for newly created Job. Additional information on the failure can be found in `terminal_condition` and `conditions`. + "runExecutionToken": "A String", # A unique string used as a suffix for creating a new execution. The Job will become ready when the execution is successfully completed. The sum of job name and token length must be fewer than 63 characters. "satisfiesPzs": True or False, # Output only. Reserved for future use. "startExecutionToken": "A String", # A unique string used as a suffix creating a new execution. The Job will become ready when the execution is successfully started. The sum of job name and token length must be fewer than 63 characters. "template": { # ExecutionTemplate describes the data an execution should have when created from a template. # Required. The template used to create executions for this Job. @@ -995,6 +998,7 @@

Method Details

"name": "A String", # The fully qualified name of this Job. Format: projects/{project}/locations/{location}/jobs/{job} "observedGeneration": "A String", # Output only. The generation of this Job. See comments in `reconciling` for additional information on reconciliation process in Cloud Run. "reconciling": True or False, # Output only. Returns true if the Job is currently being acted upon by the system to bring it into the desired state. When a new Job is created, or an existing one is updated, Cloud Run will asynchronously perform all necessary steps to bring the Job to the desired state. This process is called reconciliation. While reconciliation is in process, `observed_generation` and `latest_succeeded_execution`, will have transient values that might mismatch the intended state: Once reconciliation is over (and this field is false), there are two possible outcomes: reconciliation succeeded and the state matches the Job, or there was an error, and reconciliation failed. This state can be found in `terminal_condition.state`. If reconciliation succeeded, the following fields will match: `observed_generation` and `generation`, `latest_succeeded_execution` and `latest_created_execution`. If reconciliation failed, `observed_generation` and `latest_succeeded_execution` will have the state of the last succeeded execution or empty for newly created Job. Additional information on the failure can be found in `terminal_condition` and `conditions`. + "runExecutionToken": "A String", # A unique string used as a suffix for creating a new execution. The Job will become ready when the execution is successfully completed. The sum of job name and token length must be fewer than 63 characters. "satisfiesPzs": True or False, # Output only. Reserved for future use. "startExecutionToken": "A String", # A unique string used as a suffix creating a new execution. The Job will become ready when the execution is successfully started. The sum of job name and token length must be fewer than 63 characters. "template": { # ExecutionTemplate describes the data an execution should have when created from a template. # Required. The template used to create executions for this Job. diff --git a/docs/dyn/run_v2.projects.locations.services.html b/docs/dyn/run_v2.projects.locations.services.html index baa19aa4888..4b9341caaad 100644 --- a/docs/dyn/run_v2.projects.locations.services.html +++ b/docs/dyn/run_v2.projects.locations.services.html @@ -96,7 +96,7 @@

Instance Methods

Gets the IAM Access Control policy currently in effect for the given Cloud Run Service. This result does not include any inherited policies.

list(parent, pageSize=None, pageToken=None, showDeleted=None, x__xgafv=None)

-Lists Services.
+Lists Services. Results are sorted by creation time, descending.

list_next()

Retrieves the next page of results.

@@ -745,7 +745,7 @@

Method Details

list(parent, pageSize=None, pageToken=None, showDeleted=None, x__xgafv=None)
-Lists Services.
+Lists Services. Results are sorted by creation time, descending.
 
 Args:
   parent: string, Required. The location and project to list resources on. Location must be a valid Google Cloud region, and cannot be the "-" wildcard. Format: projects/{project}/locations/{location}, where {project} can be project id or number. (required)
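A short sketch of the showDeleted flag from the signature above (names illustrative):

from googleapiclient.discovery import build

run = build("run", "v2")

resp = run.projects().locations().services().list(
    parent="projects/PROJECT_ID/locations/us-central1",
    showDeleted=True).execute()
for svc in resp.get("services", []):
    print(svc["name"])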
diff --git a/docs/dyn/run_v2.projects.locations.services.revisions.html b/docs/dyn/run_v2.projects.locations.services.revisions.html
index 06f4cf63232..3e999f3f886 100644
--- a/docs/dyn/run_v2.projects.locations.services.revisions.html
+++ b/docs/dyn/run_v2.projects.locations.services.revisions.html
@@ -88,7 +88,7 @@ 

Instance Methods

Gets information about a Revision.

list(parent, pageSize=None, pageToken=None, showDeleted=None, x__xgafv=None)

-Lists Revisions from a given Service, or from a given location.
+Lists Revisions from a given Service, or from a given location. Results are sorted by creation time, descending.

list_next()

Retrieves the next page of results.

@@ -381,7 +381,7 @@

Method Details

list(parent, pageSize=None, pageToken=None, showDeleted=None, x__xgafv=None)
-Lists Revisions from a given Service, or from a given location.
+Lists Revisions from a given Service, or from a given location. Results are sorted by creation time, descending.
 
 Args:
   parent: string, Required. The Service from which the Revisions should be listed. To list all Revisions across Services, use "-" instead of Service name. Format: projects/{project}/locations/{location}/services/{service} (required)
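And the cross-Service form, using the "-" wildcard described in the Args above (placeholders as before):

from googleapiclient.discovery import build

run = build("run", "v2")

resp = run.projects().locations().services().revisions().list(
    parent="projects/PROJECT_ID/locations/us-central1/services/-").execute()
for rev in resp.get("revisions", []):
    print(rev["name"])  # newest Revisions first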
diff --git a/docs/dyn/spanner_v1.projects.instanceConfigs.html b/docs/dyn/spanner_v1.projects.instanceConfigs.html
index 4f502b53e44..98554ed140f 100644
--- a/docs/dyn/spanner_v1.projects.instanceConfigs.html
+++ b/docs/dyn/spanner_v1.projects.instanceConfigs.html
@@ -141,6 +141,7 @@ 

Method Details

"type": "A String", # The type of replica. }, ], + "quorumType": "A String", # Output only. The `QuorumType` of the instance configuration. "reconciling": True or False, # Output only. If true, the instance config is being created or updated. If false, there are no ongoing operations for the instance config. "replicas": [ # The geographic placement of nodes in this instance configuration and their replication properties. { @@ -239,6 +240,7 @@

Method Details

"type": "A String", # The type of replica. }, ], + "quorumType": "A String", # Output only. The `QuorumType` of the instance configuration. "reconciling": True or False, # Output only. If true, the instance config is being created or updated. If false, there are no ongoing operations for the instance config. "replicas": [ # The geographic placement of nodes in this instance configuration and their replication properties. { @@ -290,6 +292,7 @@

Method Details

"type": "A String", # The type of replica. }, ], + "quorumType": "A String", # Output only. The `QuorumType` of the instance configuration. "reconciling": True or False, # Output only. If true, the instance config is being created or updated. If false, there are no ongoing operations for the instance config. "replicas": [ # The geographic placement of nodes in this instance configuration and their replication properties. { @@ -350,6 +353,7 @@

Method Details

"type": "A String", # The type of replica. }, ], + "quorumType": "A String", # Output only. The `QuorumType` of the instance configuration. "reconciling": True or False, # Output only. If true, the instance config is being created or updated. If false, there are no ongoing operations for the instance config. "replicas": [ # The geographic placement of nodes in this instance configuration and their replication properties. { diff --git a/docs/dyn/spanner_v1.projects.instances.databases.html b/docs/dyn/spanner_v1.projects.instances.databases.html index bd44b4bb0f4..c7d4cb94497 100644 --- a/docs/dyn/spanner_v1.projects.instances.databases.html +++ b/docs/dyn/spanner_v1.projects.instances.databases.html @@ -89,6 +89,9 @@

Instance Methods

Returns the sessions Resource.

+  changequorum(name, body=None, x__xgafv=None)
+ChangeQuorum is strictly restricted to databases that use dual region instance configurations. Initiates a background operation to change the quorum of a database from dual-region mode to single-region mode and vice versa. The returned long-running operation will have a name of the format `projects//instances//databases//operations/` and can be used to track execution of the ChangeQuorum. The metadata field type is ChangeQuorumMetadata. Authorization requires `spanner.databases.changequorum` permission on the resource database.

close()

Close httplib2 connections.

@@ -132,6 +135,56 @@

Instance Methods

updateDdl(database, body=None, x__xgafv=None)

Updates the schema of a Cloud Spanner database by creating/altering/dropping tables, columns, indexes, etc. The returned long-running operation will have a name of the format `/operations/` and can be used to track execution of the schema change(s). The metadata field type is UpdateDatabaseDdlMetadata. The operation has no response.

Method Details

+changequorum(name, body=None, x__xgafv=None)
+ChangeQuorum is strictly restricted to databases that use dual region instance configurations. Initiates a background operation to change the quorum of a database from dual-region mode to single-region mode and vice versa. The returned long-running operation will have a name of the format `projects//instances//databases//operations/` and can be used to track execution of the ChangeQuorum. The metadata field type is ChangeQuorumMetadata. Authorization requires `spanner.databases.changequorum` permission on the resource database.
+
+Args:
+  name: string, Required. Name of the database in which to apply the ChangeQuorum. Values are of the form `projects//instances//databases/`. (required)
+  body: object, The request body.
+    The object takes the form of:
+
+{ # The request for ChangeQuorum.
+  "etag": "A String", # Optional. The etag is the hash of the QuorumInfo. The ChangeQuorum operation will only be performed if the etag matches that of the QuorumInfo in the current database resource. Otherwise the API will return an `ABORTED` error. The etag is used for optimistic concurrency control as a way to help prevent simultaneous change quorum requests that could create a race condition.
+  "name": "A String", # Required. Name of the database in which to apply the ChangeQuorum. Values are of the form `projects//instances//databases/`.
+  "quorumType": { # Information about the database quorum type. this applies only for dual region instance configs. # Required. The type of this Quorum.
+    "dualRegion": { # Message type for a dual-region quorum. Currently this type has no options. # Dual region quorum type.
+    },
+    "singleRegion": { # Message type for a single-region quorum. # Single region quorum type.
+      "servingLocation": "A String", # Required. The location of the serving region, e.g. "us-central1". The location must be one of the regions within the dual region instance configuration of your database. The list of valid locations is available via [GetInstanceConfig[InstanceAdmin.GetInstanceConfig] API. This should only be used if you plan to change quorum in single-region quorum type.
+    },
+  },
+}
+
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # This resource represents a long-running operation that is the result of a network API call.
+  "done": True or False, # If the value is `false`, it means the operation is still in progress. If `true`, the operation is completed, and either `error` or `response` is available.
+  "error": { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # The error result of the operation in case of failure or cancellation.
+    "code": 42, # The status code, which should be an enum value of google.rpc.Code.
+    "details": [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
+      {
+        "a_key": "", # Properties of the object. Contains field @type with type URL.
+      },
+    ],
+    "message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
+  },
+  "metadata": { # Service-specific metadata associated with the operation. It typically contains progress information and common metadata such as create time. Some services might not provide such metadata. Any method that returns a long-running operation should document the metadata type, if any.
+    "a_key": "", # Properties of the object. Contains field @type with type URL.
+  },
+  "name": "A String", # The server-assigned name, which is only unique within the same service that originally returns it. If you use the default HTTP mapping, the `name` should be a resource name ending with `operations/{unique_id}`.
+  "response": { # The normal, successful response of the operation. If the original method returns no data on success, such as `Delete`, the response is `google.protobuf.Empty`. If the original method is standard `Get`/`Create`/`Update`, the response should be the resource. For other methods, the response should have the type `XxxResponse`, where `Xxx` is the original method name. For example, if the original method name is `TakeSnapshot()`, the inferred response type is `TakeSnapshotResponse`.
+    "a_key": "", # Properties of the object. Contains field @type with type URL.
+  },
+}
+
+
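A sketch of the new method built directly from the request schema above; the database name and target region are placeholders, and the etag is omitted (so no optimistic-concurrency check is requested):

from googleapiclient.discovery import build

spanner = build("spanner", "v1")

db = "projects/PROJECT_ID/instances/INSTANCE_ID/databases/DATABASE_ID"  # illustrative
op = spanner.projects().instances().databases().changequorum(
    name=db,
    body={
        "name": db,
        # Move a dual-region database to a single-region quorum.
        "quorumType": {"singleRegion": {"servingLocation": "us-central1"}},
    },
).execute()
print(op["name"])  # poll this operation; metadata type is ChangeQuorumMetadata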
close()
Close httplib2 connections.
@@ -250,6 +303,18 @@

Method Details

    },
  ],
  "name": "A String", # Required. The name of the database. Values are of the form `projects//instances//databases/`, where `` is as specified in the `CREATE DATABASE` statement. This name can be passed to other API methods to identify the database.
+  "quorumInfo": { # Information about the dual region quorum. # Output only. Applicable only for databases that use dual region instance configurations. Contains information about the quorum.
+    "etag": "A String", # Output only. The etag is used for optimistic concurrency control as a way to help prevent simultaneous ChangeQuorum requests that could create a race condition.
+    "initiator": "A String", # Output only. Whether this ChangeQuorum was Google- or user-initiated.
+    "quorumType": { # Information about the database quorum type. This applies only for dual-region instance configs. # Output only. The type of this quorum. See QuorumType for more information about quorum type specifications.
+      "dualRegion": { # Message type for a dual-region quorum. Currently this type has no options. # Dual region quorum type.
+      },
+      "singleRegion": { # Message type for a single-region quorum. # Single region quorum type.
+        "servingLocation": "A String", # Required. The location of the serving region, e.g. "us-central1". The location must be one of the regions within the dual region instance configuration of your database. The list of valid locations is available via the [GetInstanceConfig][InstanceAdmin.GetInstanceConfig] API. This should only be used if you plan to change quorum to the single-region quorum type.
+      },
+    },
+    "startTime": "A String", # Output only. The timestamp when the request was triggered.
+  },
  "reconciling": True or False, # Output only. If true, the database is being updated. If false, there are no ongoing update operations for the database.
  "restoreInfo": { # Information about the database restore. # Output only. Applicable only for restored databases. Contains information about the restore source.
    "backupInfo": { # Information about a backup. # Information about the backup used to restore the database. The backup may no longer exist.
@@ -581,6 +646,18 @@

Method Details

    },
  ],
  "name": "A String", # Required. The name of the database. Values are of the form `projects//instances//databases/`, where `` is as specified in the `CREATE DATABASE` statement. This name can be passed to other API methods to identify the database.
+  "quorumInfo": { # Information about the dual region quorum. # Output only. Applicable only for databases that use dual region instance configurations. Contains information about the quorum.
+    "etag": "A String", # Output only. The etag is used for optimistic concurrency control as a way to help prevent simultaneous ChangeQuorum requests that could create a race condition.
+    "initiator": "A String", # Output only. Whether this ChangeQuorum was Google- or user-initiated.
+    "quorumType": { # Information about the database quorum type. This applies only for dual-region instance configs. # Output only. The type of this quorum. See QuorumType for more information about quorum type specifications.
+      "dualRegion": { # Message type for a dual-region quorum. Currently this type has no options. # Dual region quorum type.
+      },
+      "singleRegion": { # Message type for a single-region quorum. # Single region quorum type.
+        "servingLocation": "A String", # Required. The location of the serving region, e.g. "us-central1". The location must be one of the regions within the dual region instance configuration of your database. The list of valid locations is available via the [GetInstanceConfig][InstanceAdmin.GetInstanceConfig] API. This should only be used if you plan to change quorum to the single-region quorum type.
+      },
+    },
+    "startTime": "A String", # Output only. The timestamp when the request was triggered.
+  },
  "reconciling": True or False, # Output only. If true, the database is being updated. If false, there are no ongoing update operations for the database.
  "restoreInfo": { # Information about the database restore. # Output only. Applicable only for restored databases. Contains information about the restore source.
    "backupInfo": { # Information about a backup. # Information about the backup used to restore the database. The backup may no longer exist.
@@ -650,6 +727,18 @@

Method Details

    },
  ],
  "name": "A String", # Required. The name of the database. Values are of the form `projects//instances//databases/`, where `` is as specified in the `CREATE DATABASE` statement. This name can be passed to other API methods to identify the database.
+  "quorumInfo": { # Information about the dual region quorum. # Output only. Applicable only for databases that use dual region instance configurations. Contains information about the quorum.
+    "etag": "A String", # Output only. The etag is used for optimistic concurrency control as a way to help prevent simultaneous ChangeQuorum requests that could create a race condition.
+    "initiator": "A String", # Output only. Whether this ChangeQuorum was Google- or user-initiated.
+    "quorumType": { # Information about the database quorum type. This applies only for dual-region instance configs. # Output only. The type of this quorum. See QuorumType for more information about quorum type specifications.
+      "dualRegion": { # Message type for a dual-region quorum. Currently this type has no options. # Dual region quorum type.
+      },
+      "singleRegion": { # Message type for a single-region quorum. # Single region quorum type.
+        "servingLocation": "A String", # Required. The location of the serving region, e.g. "us-central1". The location must be one of the regions within the dual region instance configuration of your database. The list of valid locations is available via the [GetInstanceConfig][InstanceAdmin.GetInstanceConfig] API. This should only be used if you plan to change quorum to the single-region quorum type.
+      },
+    },
+    "startTime": "A String", # Output only. The timestamp when the request was triggered.
+  },
  "reconciling": True or False, # Output only. If true, the database is being updated. If false, there are no ongoing update operations for the database.
  "restoreInfo": { # Information about the database restore. # Output only. Applicable only for restored databases. Contains information about the restore source.
    "backupInfo": { # Information about a backup. # Information about the backup used to restore the database. The backup may no longer exist.
diff --git a/docs/dyn/spanner_v1.projects.instances.databases.sessions.html b/docs/dyn/spanner_v1.projects.instances.databases.sessions.html
index 5f68b933c71..7ba73ee4d6c 100644
--- a/docs/dyn/spanner_v1.projects.instances.databases.sessions.html
+++ b/docs/dyn/spanner_v1.projects.instances.databases.sessions.html
@@ -308,7 +308,7 @@

Method Details

The object takes the form of: { # The request for BeginTransaction. - "options": { # Transactions: Each session can have at most one active transaction at a time (note that standalone reads and queries use a transaction internally and do count towards the one transaction limit). After the active transaction is completed, the session can immediately be re-used for the next transaction. It is not necessary to create a new session for each transaction. Transaction modes: Cloud Spanner supports three transaction modes: 1. Locking read-write. This type of transaction is the only way to write data into Cloud Spanner. These transactions rely on pessimistic locking and, if necessary, two-phase commit. Locking read-write transactions may abort, requiring the application to retry. 2. Snapshot read-only. Snapshot read-only transactions provide guaranteed consistency across several reads, but do not allow writes. Snapshot read-only transactions can be configured to read at timestamps in the past, or configured to perform a strong read (where Spanner will select a timestamp such that the read is guaranteed to see the effects of all transactions that have committed before the start of the read). Snapshot read-only transactions do not need to be committed. Queries on change streams must be performed with the snapshot read-only transaction mode, specifying a strong read. Please see TransactionOptions.ReadOnly.strong for more details. 3. Partitioned DML. This type of transaction is used to execute a single Partitioned DML statement. Partitioned DML partitions the key space and runs the DML statement over each partition in parallel using separate, internal transactions that commit independently. Partitioned DML transactions do not need to be committed. For transactions that only read, snapshot read-only transactions provide simpler semantics and are almost always faster. In particular, read-only transactions do not take locks, so they do not conflict with read-write transactions. As a consequence of not taking locks, they also do not abort, so retry loops are not needed. Transactions may only read-write data in a single database. They may, however, read-write data in different tables within that database. Locking read-write transactions: Locking transactions may be used to atomically read-modify-write data anywhere in a database. This type of transaction is externally consistent. Clients should attempt to minimize the amount of time a transaction is active. Faster transactions commit with higher probability and cause less contention. Cloud Spanner attempts to keep read locks active as long as the transaction continues to do reads, and the transaction has not been terminated by Commit or Rollback. Long periods of inactivity at the client may cause Cloud Spanner to release a transaction's locks and abort it. Conceptually, a read-write transaction consists of zero or more reads or SQL statements followed by Commit. At any time before Commit, the client can send a Rollback request to abort the transaction. Semantics: Cloud Spanner can commit the transaction if all read locks it acquired are still valid at commit time, and it is able to acquire write locks for all writes. Cloud Spanner can abort the transaction for any reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees that the transaction has not modified any user data in Cloud Spanner. Unless the transaction commits, Cloud Spanner makes no guarantees about how long the transaction's locks were held for. 
It is an error to use Cloud Spanner locks for any sort of mutual exclusion other than between Cloud Spanner transactions themselves. Retrying aborted transactions: When a transaction aborts, the application can choose to retry the whole transaction again. To maximize the chances of successfully committing the retry, the client should execute the retry in the same session as the original attempt. The original session's lock priority increases with each consecutive abort, meaning that each attempt has a slightly better chance of success than the previous. Note that the lock priority is preserved per session (not per transaction). Lock priority is set by the first read or write in the first attempt of a read-write transaction. If the application starts a new session to retry the whole transaction, the transaction loses its original lock priority. Moreover, the lock priority is only preserved if the transaction fails with an `ABORTED` error. Under some circumstances (for example, many transactions attempting to modify the same row(s)), a transaction can abort many times in a short period before successfully committing. Thus, it is not a good idea to cap the number of retries a transaction can attempt; instead, it is better to limit the total amount of time spent retrying. Idle transactions: A transaction is considered idle if it has no outstanding reads or SQL queries and has not started a read or SQL query within the last 10 seconds. Idle transactions can be aborted by Cloud Spanner so that they don't hold on to locks indefinitely. If an idle transaction is aborted, the commit will fail with error `ABORTED`. If this behavior is undesirable, periodically executing a simple SQL query in the transaction (for example, `SELECT 1`) prevents the transaction from becoming idle. Snapshot read-only transactions: Snapshot read-only transactions provides a simpler method than locking read-write transactions for doing several consistent reads. However, this type of transaction does not support writes. Snapshot transactions do not take locks. Instead, they work by choosing a Cloud Spanner timestamp, then executing all reads at that timestamp. Since they do not acquire locks, they do not block concurrent read-write transactions. Unlike locking read-write transactions, snapshot read-only transactions never abort. They can fail if the chosen read timestamp is garbage collected; however, the default garbage collection policy is generous enough that most applications do not need to worry about this in practice. Snapshot read-only transactions do not need to call Commit or Rollback (and in fact are not permitted to do so). To execute a snapshot transaction, the client specifies a timestamp bound, which tells Cloud Spanner how to choose a read timestamp. The types of timestamp bound are: - Strong (the default). - Bounded staleness. - Exact staleness. If the Cloud Spanner database to be read is geographically distributed, stale read-only transactions can execute more quickly than strong or read-write transactions, because they are able to execute far from the leader replica. Each type of timestamp bound is discussed in detail below. Strong: Strong reads are guaranteed to see the effects of all transactions that have committed before the start of the read. Furthermore, all rows yielded by a single read are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. 
Strong reads are not repeatable: two consecutive strong read-only transactions might return inconsistent results if there are concurrent writes. If consistency across reads is required, the reads should be executed within a transaction or at an exact read timestamp. Queries on change streams (see below for more details) must also specify the strong read timestamp bound. See TransactionOptions.ReadOnly.strong. Exact staleness: These timestamp bounds execute reads at a user-specified timestamp. Reads at a timestamp are guaranteed to see a consistent prefix of the global transaction history: they observe modifications done by all transactions with a commit timestamp less than or equal to the read timestamp, and observe none of the modifications done by transactions with a larger commit timestamp. They will block until all conflicting transactions that may be assigned commit timestamps <= the read timestamp have finished. The timestamp can either be expressed as an absolute Cloud Spanner commit timestamp or a staleness relative to the current time. These modes do not require a "negotiation phase" to pick a timestamp. As a result, they execute slightly faster than the equivalent boundedly stale concurrency modes. On the other hand, boundedly stale reads usually return fresher results. See TransactionOptions.ReadOnly.read_timestamp and TransactionOptions.ReadOnly.exact_staleness. Bounded staleness: Bounded staleness modes allow Cloud Spanner to pick the read timestamp, subject to a user-provided staleness bound. Cloud Spanner chooses the newest timestamp within the staleness bound that allows execution of the reads at the closest available replica without blocking. All rows yielded are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. Boundedly stale reads are not repeatable: two stale reads, even if they use the same staleness bound, can execute at different timestamps and thus return inconsistent results. Boundedly stale reads execute in two phases: the first phase negotiates a timestamp among all replicas needed to serve the read. In the second phase, reads are executed at the negotiated timestamp. As a result of the two phase execution, bounded staleness reads are usually a little slower than comparable exact staleness reads. However, they are typically able to return fresher results, and are more likely to execute at the closest replica. Because the timestamp negotiation requires up-front knowledge of which rows will be read, it can only be used with single-use read-only transactions. See TransactionOptions.ReadOnly.max_staleness and TransactionOptions.ReadOnly.min_read_timestamp. Old read timestamps and garbage collection: Cloud Spanner continuously garbage collects deleted and overwritten data in the background to reclaim storage space. This process is known as "version GC". By default, version GC reclaims versions after they are one hour old. Because of this, Cloud Spanner cannot perform reads at read timestamps more than one hour in the past. This restriction also applies to in-progress reads and/or SQL queries whose timestamp become too old while executing. Reads and SQL queries with too-old read timestamps fail with the error `FAILED_PRECONDITION`. You can configure and extend the `VERSION_RETENTION_PERIOD` of a database up to a period as long as one week, which allows Cloud Spanner to perform reads up to one week in the past. 
Querying change Streams: A Change Stream is a schema object that can be configured to watch data changes on the entire database, a set of tables, or a set of columns in a database. When a change stream is created, Spanner automatically defines a corresponding SQL Table-Valued Function (TVF) that can be used to query the change records in the associated change stream using the ExecuteStreamingSql API. The name of the TVF for a change stream is generated from the name of the change stream: READ_. All queries on change stream TVFs must be executed using the ExecuteStreamingSql API with a single-use read-only transaction with a strong read-only timestamp_bound. The change stream TVF allows users to specify the start_timestamp and end_timestamp for the time range of interest. All change records within the retention period is accessible using the strong read-only timestamp_bound. All other TransactionOptions are invalid for change stream queries. In addition, if TransactionOptions.read_only.return_read_timestamp is set to true, a special value of 2^63 - 2 will be returned in the Transaction message that describes the transaction, instead of a valid read timestamp. This special value should be discarded and not used for any subsequent queries. Please see https://cloud.google.com/spanner/docs/change-streams for more details on how to query the change stream TVFs. Partitioned DML transactions: Partitioned DML transactions are used to execute DML statements with a different execution strategy that provides different, and often better, scalability properties for large, table-wide operations than DML in a ReadWrite transaction. Smaller scoped statements, such as an OLTP workload, should prefer using ReadWrite transactions. Partitioned DML partitions the keyspace and runs the DML statement on each partition in separate, internal transactions. These transactions commit automatically when complete, and run independently from one another. To reduce lock contention, this execution strategy only acquires read locks on rows that match the WHERE clause of the statement. Additionally, the smaller per-partition transactions hold locks for less time. That said, Partitioned DML is not a drop-in replacement for standard DML used in ReadWrite transactions. - The DML statement must be fully-partitionable. Specifically, the statement must be expressible as the union of many statements which each access only a single row of the table. - The statement is not applied atomically to all rows of the table. Rather, the statement is applied atomically to partitions of the table, in independent transactions. Secondary index rows are updated atomically with the base table rows. - Partitioned DML does not guarantee exactly-once execution semantics against a partition. The statement is applied at least once to each partition. It is strongly recommended that the DML statement should be idempotent to avoid unexpected results. For instance, it is potentially dangerous to run a statement such as `UPDATE table SET column = column + 1` as it could be run multiple times against some rows. - The partitions are committed automatically - there is no support for Commit or Rollback. If the call returns an error, or if the client issuing the ExecuteSql call dies, it is possible that some rows had the statement executed on them successfully. It is also possible that statement was never executed against other rows. - Partitioned DML transactions may only contain the execution of a single DML statement via ExecuteSql or ExecuteStreamingSql. 
- If any error is encountered during the execution of the partitioned DML operation (for instance, a UNIQUE INDEX violation, division by zero, or a value that cannot be stored due to schema constraints), then the operation is stopped at that point and an error is returned. It is possible that at this point, some partitions have been committed (or even committed multiple times), and other partitions have not been run at all. Given the above, Partitioned DML is good fit for large, database-wide, operations that are idempotent, such as deleting old rows from a very large table. # Required. Options for the new transaction. + "options": { # Transactions: Each session can have at most one active transaction at a time (note that standalone reads and queries use a transaction internally and do count towards the one transaction limit). After the active transaction is completed, the session can immediately be re-used for the next transaction. It is not necessary to create a new session for each transaction. Transaction modes: Cloud Spanner supports three transaction modes: 1. Locking read-write. This type of transaction is the only way to write data into Cloud Spanner. These transactions rely on pessimistic locking and, if necessary, two-phase commit. Locking read-write transactions may abort, requiring the application to retry. 2. Snapshot read-only. Snapshot read-only transactions provide guaranteed consistency across several reads, but do not allow writes. Snapshot read-only transactions can be configured to read at timestamps in the past, or configured to perform a strong read (where Spanner will select a timestamp such that the read is guaranteed to see the effects of all transactions that have committed before the start of the read). Snapshot read-only transactions do not need to be committed. Queries on change streams must be performed with the snapshot read-only transaction mode, specifying a strong read. See TransactionOptions.ReadOnly.strong for more details. 3. Partitioned DML. This type of transaction is used to execute a single Partitioned DML statement. Partitioned DML partitions the key space and runs the DML statement over each partition in parallel using separate, internal transactions that commit independently. Partitioned DML transactions do not need to be committed. For transactions that only read, snapshot read-only transactions provide simpler semantics and are almost always faster. In particular, read-only transactions do not take locks, so they do not conflict with read-write transactions. As a consequence of not taking locks, they also do not abort, so retry loops are not needed. Transactions may only read-write data in a single database. They may, however, read-write data in different tables within that database. Locking read-write transactions: Locking transactions may be used to atomically read-modify-write data anywhere in a database. This type of transaction is externally consistent. Clients should attempt to minimize the amount of time a transaction is active. Faster transactions commit with higher probability and cause less contention. Cloud Spanner attempts to keep read locks active as long as the transaction continues to do reads, and the transaction has not been terminated by Commit or Rollback. Long periods of inactivity at the client may cause Cloud Spanner to release a transaction's locks and abort it. Conceptually, a read-write transaction consists of zero or more reads or SQL statements followed by Commit. 
At any time before Commit, the client can send a Rollback request to abort the transaction. Semantics: Cloud Spanner can commit the transaction if all read locks it acquired are still valid at commit time, and it is able to acquire write locks for all writes. Cloud Spanner can abort the transaction for any reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees that the transaction has not modified any user data in Cloud Spanner. Unless the transaction commits, Cloud Spanner makes no guarantees about how long the transaction's locks were held for. It is an error to use Cloud Spanner locks for any sort of mutual exclusion other than between Cloud Spanner transactions themselves. Retrying aborted transactions: When a transaction aborts, the application can choose to retry the whole transaction again. To maximize the chances of successfully committing the retry, the client should execute the retry in the same session as the original attempt. The original session's lock priority increases with each consecutive abort, meaning that each attempt has a slightly better chance of success than the previous. Note that the lock priority is preserved per session (not per transaction). Lock priority is set by the first read or write in the first attempt of a read-write transaction. If the application starts a new session to retry the whole transaction, the transaction loses its original lock priority. Moreover, the lock priority is only preserved if the transaction fails with an `ABORTED` error. Under some circumstances (for example, many transactions attempting to modify the same row(s)), a transaction can abort many times in a short period before successfully committing. Thus, it is not a good idea to cap the number of retries a transaction can attempt; instead, it is better to limit the total amount of time spent retrying. Idle transactions: A transaction is considered idle if it has no outstanding reads or SQL queries and has not started a read or SQL query within the last 10 seconds. Idle transactions can be aborted by Cloud Spanner so that they don't hold on to locks indefinitely. If an idle transaction is aborted, the commit will fail with error `ABORTED`. If this behavior is undesirable, periodically executing a simple SQL query in the transaction (for example, `SELECT 1`) prevents the transaction from becoming idle. Snapshot read-only transactions: Snapshot read-only transactions provides a simpler method than locking read-write transactions for doing several consistent reads. However, this type of transaction does not support writes. Snapshot transactions do not take locks. Instead, they work by choosing a Cloud Spanner timestamp, then executing all reads at that timestamp. Since they do not acquire locks, they do not block concurrent read-write transactions. Unlike locking read-write transactions, snapshot read-only transactions never abort. They can fail if the chosen read timestamp is garbage collected; however, the default garbage collection policy is generous enough that most applications do not need to worry about this in practice. Snapshot read-only transactions do not need to call Commit or Rollback (and in fact are not permitted to do so). To execute a snapshot transaction, the client specifies a timestamp bound, which tells Cloud Spanner how to choose a read timestamp. The types of timestamp bound are: - Strong (the default). - Bounded staleness. - Exact staleness. 
If the Cloud Spanner database to be read is geographically distributed, stale read-only transactions can execute more quickly than strong or read-write transactions, because they are able to execute far from the leader replica. Each type of timestamp bound is discussed in detail below. Strong: Strong reads are guaranteed to see the effects of all transactions that have committed before the start of the read. Furthermore, all rows yielded by a single read are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. Strong reads are not repeatable: two consecutive strong read-only transactions might return inconsistent results if there are concurrent writes. If consistency across reads is required, the reads should be executed within a transaction or at an exact read timestamp. Queries on change streams (see below for more details) must also specify the strong read timestamp bound. See TransactionOptions.ReadOnly.strong. Exact staleness: These timestamp bounds execute reads at a user-specified timestamp. Reads at a timestamp are guaranteed to see a consistent prefix of the global transaction history: they observe modifications done by all transactions with a commit timestamp less than or equal to the read timestamp, and observe none of the modifications done by transactions with a larger commit timestamp. They will block until all conflicting transactions that may be assigned commit timestamps <= the read timestamp have finished. The timestamp can either be expressed as an absolute Cloud Spanner commit timestamp or a staleness relative to the current time. These modes do not require a "negotiation phase" to pick a timestamp. As a result, they execute slightly faster than the equivalent boundedly stale concurrency modes. On the other hand, boundedly stale reads usually return fresher results. See TransactionOptions.ReadOnly.read_timestamp and TransactionOptions.ReadOnly.exact_staleness. Bounded staleness: Bounded staleness modes allow Cloud Spanner to pick the read timestamp, subject to a user-provided staleness bound. Cloud Spanner chooses the newest timestamp within the staleness bound that allows execution of the reads at the closest available replica without blocking. All rows yielded are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. Boundedly stale reads are not repeatable: two stale reads, even if they use the same staleness bound, can execute at different timestamps and thus return inconsistent results. Boundedly stale reads execute in two phases: the first phase negotiates a timestamp among all replicas needed to serve the read. In the second phase, reads are executed at the negotiated timestamp. As a result of the two phase execution, bounded staleness reads are usually a little slower than comparable exact staleness reads. However, they are typically able to return fresher results, and are more likely to execute at the closest replica. Because the timestamp negotiation requires up-front knowledge of which rows will be read, it can only be used with single-use read-only transactions. See TransactionOptions.ReadOnly.max_staleness and TransactionOptions.ReadOnly.min_read_timestamp. Old read timestamps and garbage collection: Cloud Spanner continuously garbage collects deleted and overwritten data in the background to reclaim storage space. This process is known as "version GC". By default, version GC reclaims versions after they are one hour old. 
Because of this, Cloud Spanner cannot perform reads at read timestamps more than one hour in the past. This restriction also applies to in-progress reads and/or SQL queries whose timestamp become too old while executing. Reads and SQL queries with too-old read timestamps fail with the error `FAILED_PRECONDITION`. You can configure and extend the `VERSION_RETENTION_PERIOD` of a database up to a period as long as one week, which allows Cloud Spanner to perform reads up to one week in the past. Querying change Streams: A Change Stream is a schema object that can be configured to watch data changes on the entire database, a set of tables, or a set of columns in a database. When a change stream is created, Spanner automatically defines a corresponding SQL Table-Valued Function (TVF) that can be used to query the change records in the associated change stream using the ExecuteStreamingSql API. The name of the TVF for a change stream is generated from the name of the change stream: READ_. All queries on change stream TVFs must be executed using the ExecuteStreamingSql API with a single-use read-only transaction with a strong read-only timestamp_bound. The change stream TVF allows users to specify the start_timestamp and end_timestamp for the time range of interest. All change records within the retention period is accessible using the strong read-only timestamp_bound. All other TransactionOptions are invalid for change stream queries. In addition, if TransactionOptions.read_only.return_read_timestamp is set to true, a special value of 2^63 - 2 will be returned in the Transaction message that describes the transaction, instead of a valid read timestamp. This special value should be discarded and not used for any subsequent queries. Please see https://cloud.google.com/spanner/docs/change-streams for more details on how to query the change stream TVFs. Partitioned DML transactions: Partitioned DML transactions are used to execute DML statements with a different execution strategy that provides different, and often better, scalability properties for large, table-wide operations than DML in a ReadWrite transaction. Smaller scoped statements, such as an OLTP workload, should prefer using ReadWrite transactions. Partitioned DML partitions the keyspace and runs the DML statement on each partition in separate, internal transactions. These transactions commit automatically when complete, and run independently from one another. To reduce lock contention, this execution strategy only acquires read locks on rows that match the WHERE clause of the statement. Additionally, the smaller per-partition transactions hold locks for less time. That said, Partitioned DML is not a drop-in replacement for standard DML used in ReadWrite transactions. - The DML statement must be fully-partitionable. Specifically, the statement must be expressible as the union of many statements which each access only a single row of the table. - The statement is not applied atomically to all rows of the table. Rather, the statement is applied atomically to partitions of the table, in independent transactions. Secondary index rows are updated atomically with the base table rows. - Partitioned DML does not guarantee exactly-once execution semantics against a partition. The statement is applied at least once to each partition. It is strongly recommended that the DML statement should be idempotent to avoid unexpected results. 
For instance, it is potentially dangerous to run a statement such as `UPDATE table SET column = column + 1` as it could be run multiple times against some rows. - The partitions are committed automatically - there is no support for Commit or Rollback. If the call returns an error, or if the client issuing the ExecuteSql call dies, it is possible that some rows had the statement executed on them successfully. It is also possible that the statement was never executed against other rows. - Partitioned DML transactions may only contain the execution of a single DML statement via ExecuteSql or ExecuteStreamingSql. - If any error is encountered during the execution of the partitioned DML operation (for instance, a UNIQUE INDEX violation, division by zero, or a value that cannot be stored due to schema constraints), then the operation is stopped at that point and an error is returned. It is possible that at this point, some partitions have been committed (or even committed multiple times), and other partitions have not been run at all. Given the above, Partitioned DML is a good fit for large, database-wide operations that are idempotent, such as deleting old rows from a very large table. # Required. Options for the new transaction. "excludeTxnFromChangeStreams": True or False, # When `exclude_txn_from_change_streams` is set to `true`: * Modifications from this transaction will not be recorded in change streams with DDL option `allow_txn_exclusion=true` that are tracking columns modified by these transactions. * Modifications from this transaction will be recorded in change streams with DDL option `allow_txn_exclusion=false or not set` that are tracking columns modified by these transactions. When `exclude_txn_from_change_streams` is set to `false` or not set, modifications from this transaction will be recorded in all change streams that are tracking columns modified by these transactions. `exclude_txn_from_change_streams` may only be specified for read-write or partitioned-dml transactions, otherwise the API will return an `INVALID_ARGUMENT` error. "partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction. Authorization to begin a Partitioned DML transaction requires `spanner.databases.beginPartitionedDmlTransaction` permission on the `session` resource. }, @@ -442,7 +442,7 @@
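
To make the Partitioned DML flow above concrete, here is a minimal sketch using the discovery-based Python client from this library. The project, instance, database, table name, and SQL text are placeholders, and error handling is omitted; it illustrates the request shapes described in this document rather than a definitive recipe.

    from googleapiclient import discovery

    spanner = discovery.build("spanner", "v1")
    database = "projects/my-project/instances/my-instance/databases/my-db"  # placeholder
    sessions = spanner.projects().instances().databases().sessions()

    # Transactions run inside a session.
    session = sessions.create(database=database, body={}).execute()

    # Begin a Partitioned DML transaction. There is no Commit or Rollback:
    # the per-partition transactions commit automatically as they complete.
    txn = sessions.beginTransaction(
        session=session["name"],
        body={"options": {"partitionedDml": {}}},
    ).execute()

    # The statement may run more than once against a partition, so it should
    # be idempotent; a DELETE with a stable predicate qualifies.
    sessions.executeSql(
        session=session["name"],
        body={
            "transaction": {"id": txn["id"]},
            "sql": "DELETE FROM Events WHERE CreatedAt < TIMESTAMP '2023-01-01'",
            "seqno": "1",
        },
    ).execute()
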

Method Details

"transactionTag": "A String", # A tag used for statistics collection about this transaction. Both request_tag and transaction_tag can be specified for a read or query that belongs to a transaction. The value of transaction_tag should be the same for all requests belonging to the same transaction. If this request doesn't belong to any transaction, transaction_tag will be ignored. Legal characters for `transaction_tag` values are all printable characters (ASCII 32 - 126) and the length of a transaction_tag is limited to 50 characters. Values that exceed this limit are truncated. Any leading underscore (_) characters will be removed from the string. }, "returnCommitStats": True or False, # If `true`, then statistics related to the transaction will be included in the CommitResponse. Default value is `false`. - "singleUseTransaction": { # Transactions: Each session can have at most one active transaction at a time (note that standalone reads and queries use a transaction internally and do count towards the one transaction limit). After the active transaction is completed, the session can immediately be re-used for the next transaction. It is not necessary to create a new session for each transaction. Transaction modes: Cloud Spanner supports three transaction modes: 1. Locking read-write. This type of transaction is the only way to write data into Cloud Spanner. These transactions rely on pessimistic locking and, if necessary, two-phase commit. Locking read-write transactions may abort, requiring the application to retry. 2. Snapshot read-only. Snapshot read-only transactions provide guaranteed consistency across several reads, but do not allow writes. Snapshot read-only transactions can be configured to read at timestamps in the past, or configured to perform a strong read (where Spanner will select a timestamp such that the read is guaranteed to see the effects of all transactions that have committed before the start of the read). Snapshot read-only transactions do not need to be committed. Queries on change streams must be performed with the snapshot read-only transaction mode, specifying a strong read. Please see TransactionOptions.ReadOnly.strong for more details. 3. Partitioned DML. This type of transaction is used to execute a single Partitioned DML statement. Partitioned DML partitions the key space and runs the DML statement over each partition in parallel using separate, internal transactions that commit independently. Partitioned DML transactions do not need to be committed. For transactions that only read, snapshot read-only transactions provide simpler semantics and are almost always faster. In particular, read-only transactions do not take locks, so they do not conflict with read-write transactions. As a consequence of not taking locks, they also do not abort, so retry loops are not needed. Transactions may only read-write data in a single database. They may, however, read-write data in different tables within that database. Locking read-write transactions: Locking transactions may be used to atomically read-modify-write data anywhere in a database. This type of transaction is externally consistent. Clients should attempt to minimize the amount of time a transaction is active. Faster transactions commit with higher probability and cause less contention. Cloud Spanner attempts to keep read locks active as long as the transaction continues to do reads, and the transaction has not been terminated by Commit or Rollback. 
Long periods of inactivity at the client may cause Cloud Spanner to release a transaction's locks and abort it. Conceptually, a read-write transaction consists of zero or more reads or SQL statements followed by Commit. At any time before Commit, the client can send a Rollback request to abort the transaction. Semantics: Cloud Spanner can commit the transaction if all read locks it acquired are still valid at commit time, and it is able to acquire write locks for all writes. Cloud Spanner can abort the transaction for any reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees that the transaction has not modified any user data in Cloud Spanner. Unless the transaction commits, Cloud Spanner makes no guarantees about how long the transaction's locks were held for. It is an error to use Cloud Spanner locks for any sort of mutual exclusion other than between Cloud Spanner transactions themselves. Retrying aborted transactions: When a transaction aborts, the application can choose to retry the whole transaction again. To maximize the chances of successfully committing the retry, the client should execute the retry in the same session as the original attempt. The original session's lock priority increases with each consecutive abort, meaning that each attempt has a slightly better chance of success than the previous. Note that the lock priority is preserved per session (not per transaction). Lock priority is set by the first read or write in the first attempt of a read-write transaction. If the application starts a new session to retry the whole transaction, the transaction loses its original lock priority. Moreover, the lock priority is only preserved if the transaction fails with an `ABORTED` error. Under some circumstances (for example, many transactions attempting to modify the same row(s)), a transaction can abort many times in a short period before successfully committing. Thus, it is not a good idea to cap the number of retries a transaction can attempt; instead, it is better to limit the total amount of time spent retrying. Idle transactions: A transaction is considered idle if it has no outstanding reads or SQL queries and has not started a read or SQL query within the last 10 seconds. Idle transactions can be aborted by Cloud Spanner so that they don't hold on to locks indefinitely. If an idle transaction is aborted, the commit will fail with error `ABORTED`. If this behavior is undesirable, periodically executing a simple SQL query in the transaction (for example, `SELECT 1`) prevents the transaction from becoming idle. Snapshot read-only transactions: Snapshot read-only transactions provide a simpler method than locking read-write transactions for doing several consistent reads. However, this type of transaction does not support writes. Snapshot transactions do not take locks. Instead, they work by choosing a Cloud Spanner timestamp, then executing all reads at that timestamp. Since they do not acquire locks, they do not block concurrent read-write transactions. Unlike locking read-write transactions, snapshot read-only transactions never abort. They can fail if the chosen read timestamp is garbage collected; however, the default garbage collection policy is generous enough that most applications do not need to worry about this in practice. Snapshot read-only transactions do not need to call Commit or Rollback (and in fact are not permitted to do so). 
To execute a snapshot transaction, the client specifies a timestamp bound, which tells Cloud Spanner how to choose a read timestamp. The types of timestamp bound are: - Strong (the default). - Bounded staleness. - Exact staleness. If the Cloud Spanner database to be read is geographically distributed, stale read-only transactions can execute more quickly than strong or read-write transactions, because they are able to execute far from the leader replica. Each type of timestamp bound is discussed in detail below. Strong: Strong reads are guaranteed to see the effects of all transactions that have committed before the start of the read. Furthermore, all rows yielded by a single read are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. Strong reads are not repeatable: two consecutive strong read-only transactions might return inconsistent results if there are concurrent writes. If consistency across reads is required, the reads should be executed within a transaction or at an exact read timestamp. Queries on change streams (see below for more details) must also specify the strong read timestamp bound. See TransactionOptions.ReadOnly.strong. Exact staleness: These timestamp bounds execute reads at a user-specified timestamp. Reads at a timestamp are guaranteed to see a consistent prefix of the global transaction history: they observe modifications done by all transactions with a commit timestamp less than or equal to the read timestamp, and observe none of the modifications done by transactions with a larger commit timestamp. They will block until all conflicting transactions that may be assigned commit timestamps <= the read timestamp have finished. The timestamp can either be expressed as an absolute Cloud Spanner commit timestamp or a staleness relative to the current time. These modes do not require a "negotiation phase" to pick a timestamp. As a result, they execute slightly faster than the equivalent boundedly stale concurrency modes. On the other hand, boundedly stale reads usually return fresher results. See TransactionOptions.ReadOnly.read_timestamp and TransactionOptions.ReadOnly.exact_staleness. Bounded staleness: Bounded staleness modes allow Cloud Spanner to pick the read timestamp, subject to a user-provided staleness bound. Cloud Spanner chooses the newest timestamp within the staleness bound that allows execution of the reads at the closest available replica without blocking. All rows yielded are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. Boundedly stale reads are not repeatable: two stale reads, even if they use the same staleness bound, can execute at different timestamps and thus return inconsistent results. Boundedly stale reads execute in two phases: the first phase negotiates a timestamp among all replicas needed to serve the read. In the second phase, reads are executed at the negotiated timestamp. As a result of the two-phase execution, bounded staleness reads are usually a little slower than comparable exact staleness reads. However, they are typically able to return fresher results, and are more likely to execute at the closest replica. Because the timestamp negotiation requires up-front knowledge of which rows will be read, it can only be used with single-use read-only transactions. See TransactionOptions.ReadOnly.max_staleness and TransactionOptions.ReadOnly.min_read_timestamp. 
Old read timestamps and garbage collection: Cloud Spanner continuously garbage collects deleted and overwritten data in the background to reclaim storage space. This process is known as "version GC". By default, version GC reclaims versions after they are one hour old. Because of this, Cloud Spanner cannot perform reads at read timestamps more than one hour in the past. This restriction also applies to in-progress reads and/or SQL queries whose timestamps become too old while executing. Reads and SQL queries with too-old read timestamps fail with the error `FAILED_PRECONDITION`. You can configure and extend the `VERSION_RETENTION_PERIOD` of a database up to a period as long as one week, which allows Cloud Spanner to perform reads up to one week in the past. Querying Change Streams: A change stream is a schema object that can be configured to watch data changes on the entire database, a set of tables, or a set of columns in a database. When a change stream is created, Spanner automatically defines a corresponding SQL Table-Valued Function (TVF) that can be used to query the change records in the associated change stream using the ExecuteStreamingSql API. The name of the TVF for a change stream is generated from the name of the change stream: READ_. All queries on change stream TVFs must be executed using the ExecuteStreamingSql API with a single-use read-only transaction with a strong read-only timestamp_bound. The change stream TVF allows users to specify the start_timestamp and end_timestamp for the time range of interest. All change records within the retention period are accessible using the strong read-only timestamp_bound. All other TransactionOptions are invalid for change stream queries. In addition, if TransactionOptions.read_only.return_read_timestamp is set to true, a special value of 2^63 - 2 will be returned in the Transaction message that describes the transaction, instead of a valid read timestamp. This special value should be discarded and not used for any subsequent queries. Please see https://cloud.google.com/spanner/docs/change-streams for more details on how to query the change stream TVFs. Partitioned DML transactions: Partitioned DML transactions are used to execute DML statements with a different execution strategy that provides different, and often better, scalability properties for large, table-wide operations than DML in a ReadWrite transaction. Smaller scoped statements, such as an OLTP workload, should prefer using ReadWrite transactions. Partitioned DML partitions the keyspace and runs the DML statement on each partition in separate, internal transactions. These transactions commit automatically when complete, and run independently from one another. To reduce lock contention, this execution strategy only acquires read locks on rows that match the WHERE clause of the statement. Additionally, the smaller per-partition transactions hold locks for less time. That said, Partitioned DML is not a drop-in replacement for standard DML used in ReadWrite transactions. - The DML statement must be fully-partitionable. Specifically, the statement must be expressible as the union of many statements which each access only a single row of the table. - The statement is not applied atomically to all rows of the table. Rather, the statement is applied atomically to partitions of the table, in independent transactions. Secondary index rows are updated atomically with the base table rows. - Partitioned DML does not guarantee exactly-once execution semantics against a partition. 
The statement is applied at least once to each partition. It is strongly recommended that the DML statement be idempotent to avoid unexpected results. For instance, it is potentially dangerous to run a statement such as `UPDATE table SET column = column + 1` as it could be run multiple times against some rows. - The partitions are committed automatically - there is no support for Commit or Rollback. If the call returns an error, or if the client issuing the ExecuteSql call dies, it is possible that some rows had the statement executed on them successfully. It is also possible that the statement was never executed against other rows. - Partitioned DML transactions may only contain the execution of a single DML statement via ExecuteSql or ExecuteStreamingSql. - If any error is encountered during the execution of the partitioned DML operation (for instance, a UNIQUE INDEX violation, division by zero, or a value that cannot be stored due to schema constraints), then the operation is stopped at that point and an error is returned. It is possible that at this point, some partitions have been committed (or even committed multiple times), and other partitions have not been run at all. Given the above, Partitioned DML is a good fit for large, database-wide operations that are idempotent, such as deleting old rows from a very large table. # Execute mutations in a temporary transaction. Note that unlike commit of a previously-started transaction, commit with a temporary transaction is non-idempotent. That is, if the `CommitRequest` is sent to Cloud Spanner more than once (for instance, due to retries in the application, or in the transport library), it is possible that the mutations are executed more than once. If this is undesirable, use BeginTransaction and Commit instead. + "singleUseTransaction": { # Transactions: Each session can have at most one active transaction at a time (note that standalone reads and queries use a transaction internally and do count towards the one transaction limit). After the active transaction is completed, the session can immediately be re-used for the next transaction. It is not necessary to create a new session for each transaction. Transaction modes: Cloud Spanner supports three transaction modes: 1. Locking read-write. This type of transaction is the only way to write data into Cloud Spanner. These transactions rely on pessimistic locking and, if necessary, two-phase commit. Locking read-write transactions may abort, requiring the application to retry. 2. Snapshot read-only. Snapshot read-only transactions provide guaranteed consistency across several reads, but do not allow writes. Snapshot read-only transactions can be configured to read at timestamps in the past, or configured to perform a strong read (where Spanner will select a timestamp such that the read is guaranteed to see the effects of all transactions that have committed before the start of the read). Snapshot read-only transactions do not need to be committed. Queries on change streams must be performed with the snapshot read-only transaction mode, specifying a strong read. See TransactionOptions.ReadOnly.strong for more details. 3. Partitioned DML. This type of transaction is used to execute a single Partitioned DML statement. Partitioned DML partitions the key space and runs the DML statement over each partition in parallel using separate, internal transactions that commit independently. Partitioned DML transactions do not need to be committed. 
For transactions that only read, snapshot read-only transactions provide simpler semantics and are almost always faster. In particular, read-only transactions do not take locks, so they do not conflict with read-write transactions. As a consequence of not taking locks, they also do not abort, so retry loops are not needed. Transactions may only read-write data in a single database. They may, however, read-write data in different tables within that database. Locking read-write transactions: Locking transactions may be used to atomically read-modify-write data anywhere in a database. This type of transaction is externally consistent. Clients should attempt to minimize the amount of time a transaction is active. Faster transactions commit with higher probability and cause less contention. Cloud Spanner attempts to keep read locks active as long as the transaction continues to do reads, and the transaction has not been terminated by Commit or Rollback. Long periods of inactivity at the client may cause Cloud Spanner to release a transaction's locks and abort it. Conceptually, a read-write transaction consists of zero or more reads or SQL statements followed by Commit. At any time before Commit, the client can send a Rollback request to abort the transaction. Semantics: Cloud Spanner can commit the transaction if all read locks it acquired are still valid at commit time, and it is able to acquire write locks for all writes. Cloud Spanner can abort the transaction for any reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees that the transaction has not modified any user data in Cloud Spanner. Unless the transaction commits, Cloud Spanner makes no guarantees about how long the transaction's locks were held for. It is an error to use Cloud Spanner locks for any sort of mutual exclusion other than between Cloud Spanner transactions themselves. Retrying aborted transactions: When a transaction aborts, the application can choose to retry the whole transaction again. To maximize the chances of successfully committing the retry, the client should execute the retry in the same session as the original attempt. The original session's lock priority increases with each consecutive abort, meaning that each attempt has a slightly better chance of success than the previous. Note that the lock priority is preserved per session (not per transaction). Lock priority is set by the first read or write in the first attempt of a read-write transaction. If the application starts a new session to retry the whole transaction, the transaction loses its original lock priority. Moreover, the lock priority is only preserved if the transaction fails with an `ABORTED` error. Under some circumstances (for example, many transactions attempting to modify the same row(s)), a transaction can abort many times in a short period before successfully committing. Thus, it is not a good idea to cap the number of retries a transaction can attempt; instead, it is better to limit the total amount of time spent retrying. Idle transactions: A transaction is considered idle if it has no outstanding reads or SQL queries and has not started a read or SQL query within the last 10 seconds. Idle transactions can be aborted by Cloud Spanner so that they don't hold on to locks indefinitely. If an idle transaction is aborted, the commit will fail with error `ABORTED`. If this behavior is undesirable, periodically executing a simple SQL query in the transaction (for example, `SELECT 1`) prevents the transaction from becoming idle. 
Snapshot read-only transactions: Snapshot read-only transactions provide a simpler method than locking read-write transactions for doing several consistent reads. However, this type of transaction does not support writes. Snapshot transactions do not take locks. Instead, they work by choosing a Cloud Spanner timestamp, then executing all reads at that timestamp. Since they do not acquire locks, they do not block concurrent read-write transactions. Unlike locking read-write transactions, snapshot read-only transactions never abort. They can fail if the chosen read timestamp is garbage collected; however, the default garbage collection policy is generous enough that most applications do not need to worry about this in practice. Snapshot read-only transactions do not need to call Commit or Rollback (and in fact are not permitted to do so). To execute a snapshot transaction, the client specifies a timestamp bound, which tells Cloud Spanner how to choose a read timestamp. The types of timestamp bound are: - Strong (the default). - Bounded staleness. - Exact staleness. If the Cloud Spanner database to be read is geographically distributed, stale read-only transactions can execute more quickly than strong or read-write transactions, because they are able to execute far from the leader replica. Each type of timestamp bound is discussed in detail below. Strong: Strong reads are guaranteed to see the effects of all transactions that have committed before the start of the read. Furthermore, all rows yielded by a single read are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. Strong reads are not repeatable: two consecutive strong read-only transactions might return inconsistent results if there are concurrent writes. If consistency across reads is required, the reads should be executed within a transaction or at an exact read timestamp. Queries on change streams (see below for more details) must also specify the strong read timestamp bound. See TransactionOptions.ReadOnly.strong. Exact staleness: These timestamp bounds execute reads at a user-specified timestamp. Reads at a timestamp are guaranteed to see a consistent prefix of the global transaction history: they observe modifications done by all transactions with a commit timestamp less than or equal to the read timestamp, and observe none of the modifications done by transactions with a larger commit timestamp. They will block until all conflicting transactions that may be assigned commit timestamps <= the read timestamp have finished. The timestamp can either be expressed as an absolute Cloud Spanner commit timestamp or a staleness relative to the current time. These modes do not require a "negotiation phase" to pick a timestamp. As a result, they execute slightly faster than the equivalent boundedly stale concurrency modes. On the other hand, boundedly stale reads usually return fresher results. See TransactionOptions.ReadOnly.read_timestamp and TransactionOptions.ReadOnly.exact_staleness. Bounded staleness: Bounded staleness modes allow Cloud Spanner to pick the read timestamp, subject to a user-provided staleness bound. Cloud Spanner chooses the newest timestamp within the staleness bound that allows execution of the reads at the closest available replica without blocking. All rows yielded are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. 
Boundedly stale reads are not repeatable: two stale reads, even if they use the same staleness bound, can execute at different timestamps and thus return inconsistent results. Boundedly stale reads execute in two phases: the first phase negotiates a timestamp among all replicas needed to serve the read. In the second phase, reads are executed at the negotiated timestamp. As a result of the two-phase execution, bounded staleness reads are usually a little slower than comparable exact staleness reads. However, they are typically able to return fresher results, and are more likely to execute at the closest replica. Because the timestamp negotiation requires up-front knowledge of which rows will be read, it can only be used with single-use read-only transactions. See TransactionOptions.ReadOnly.max_staleness and TransactionOptions.ReadOnly.min_read_timestamp. Old read timestamps and garbage collection: Cloud Spanner continuously garbage collects deleted and overwritten data in the background to reclaim storage space. This process is known as "version GC". By default, version GC reclaims versions after they are one hour old. Because of this, Cloud Spanner cannot perform reads at read timestamps more than one hour in the past. This restriction also applies to in-progress reads and/or SQL queries whose timestamps become too old while executing. Reads and SQL queries with too-old read timestamps fail with the error `FAILED_PRECONDITION`. You can configure and extend the `VERSION_RETENTION_PERIOD` of a database up to a period as long as one week, which allows Cloud Spanner to perform reads up to one week in the past. Querying Change Streams: A change stream is a schema object that can be configured to watch data changes on the entire database, a set of tables, or a set of columns in a database. When a change stream is created, Spanner automatically defines a corresponding SQL Table-Valued Function (TVF) that can be used to query the change records in the associated change stream using the ExecuteStreamingSql API. The name of the TVF for a change stream is generated from the name of the change stream: READ_. All queries on change stream TVFs must be executed using the ExecuteStreamingSql API with a single-use read-only transaction with a strong read-only timestamp_bound. The change stream TVF allows users to specify the start_timestamp and end_timestamp for the time range of interest. All change records within the retention period are accessible using the strong read-only timestamp_bound. All other TransactionOptions are invalid for change stream queries. In addition, if TransactionOptions.read_only.return_read_timestamp is set to true, a special value of 2^63 - 2 will be returned in the Transaction message that describes the transaction, instead of a valid read timestamp. This special value should be discarded and not used for any subsequent queries. Please see https://cloud.google.com/spanner/docs/change-streams for more details on how to query the change stream TVFs. Partitioned DML transactions: Partitioned DML transactions are used to execute DML statements with a different execution strategy that provides different, and often better, scalability properties for large, table-wide operations than DML in a ReadWrite transaction. Smaller scoped statements, such as an OLTP workload, should prefer using ReadWrite transactions. Partitioned DML partitions the keyspace and runs the DML statement on each partition in separate, internal transactions. 
These transactions commit automatically when complete, and run independently from one another. To reduce lock contention, this execution strategy only acquires read locks on rows that match the WHERE clause of the statement. Additionally, the smaller per-partition transactions hold locks for less time. That said, Partitioned DML is not a drop-in replacement for standard DML used in ReadWrite transactions. - The DML statement must be fully-partitionable. Specifically, the statement must be expressible as the union of many statements which each access only a single row of the table. - The statement is not applied atomically to all rows of the table. Rather, the statement is applied atomically to partitions of the table, in independent transactions. Secondary index rows are updated atomically with the base table rows. - Partitioned DML does not guarantee exactly-once execution semantics against a partition. The statement is applied at least once to each partition. It is strongly recommended that the DML statement be idempotent to avoid unexpected results. For instance, it is potentially dangerous to run a statement such as `UPDATE table SET column = column + 1` as it could be run multiple times against some rows. - The partitions are committed automatically - there is no support for Commit or Rollback. If the call returns an error, or if the client issuing the ExecuteSql call dies, it is possible that some rows had the statement executed on them successfully. It is also possible that the statement was never executed against other rows. - Partitioned DML transactions may only contain the execution of a single DML statement via ExecuteSql or ExecuteStreamingSql. - If any error is encountered during the execution of the partitioned DML operation (for instance, a UNIQUE INDEX violation, division by zero, or a value that cannot be stored due to schema constraints), then the operation is stopped at that point and an error is returned. It is possible that at this point, some partitions have been committed (or even committed multiple times), and other partitions have not been run at all. Given the above, Partitioned DML is a good fit for large, database-wide operations that are idempotent, such as deleting old rows from a very large table. # Execute mutations in a temporary transaction. Note that unlike commit of a previously-started transaction, commit with a temporary transaction is non-idempotent. That is, if the `CommitRequest` is sent to Cloud Spanner more than once (for instance, due to retries in the application, or in the transport library), it is possible that the mutations are executed more than once. If this is undesirable, use BeginTransaction and Commit instead. "excludeTxnFromChangeStreams": True or False, # When `exclude_txn_from_change_streams` is set to `true`: * Modifications from this transaction will not be recorded in change streams with DDL option `allow_txn_exclusion=true` that are tracking columns modified by these transactions. * Modifications from this transaction will be recorded in change streams with DDL option `allow_txn_exclusion=false or not set` that are tracking columns modified by these transactions. When `exclude_txn_from_change_streams` is set to `false` or not set, modifications from this transaction will be recorded in all change streams that are tracking columns modified by these transactions. `exclude_txn_from_change_streams` may only be specified for read-write or partitioned-dml transactions, otherwise the API will return an `INVALID_ARGUMENT` error. 
"partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction. Authorization to begin a Partitioned DML transaction requires `spanner.databases.beginPartitionedDmlTransaction` permission on the `session` resource. }, @@ -578,7 +578,7 @@

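As a companion to the `singleUseTransaction` and `exclude_txn_from_change_streams` descriptions above, the following sketch commits mutations in a temporary read-write transaction; the table and column names are hypothetical. Because this form is non-idempotent, a retried `CommitRequest` can apply the mutations twice, so BeginTransaction plus Commit is preferable when replays matter.

    from googleapiclient import discovery

    spanner = discovery.build("spanner", "v1")
    sessions = spanner.projects().instances().databases().sessions()
    database = "projects/my-project/instances/my-instance/databases/my-db"  # placeholder
    session = sessions.create(database=database, body={}).execute()

    commit_body = {
        "singleUseTransaction": {
            "readWrite": {},
            # Keep this transaction out of change streams created with the DDL
            # option allow_txn_exclusion=true (read-write and partitioned-dml
            # transactions only).
            "excludeTxnFromChangeStreams": True,
        },
        "mutations": [
            {
                "insertOrUpdate": {
                    "table": "Players",  # hypothetical table
                    "columns": ["PlayerId", "Score"],
                    "values": [["player-1", "42"]],  # INT64 values travel as JSON strings
                }
            }
        ],
        "returnCommitStats": True,  # include CommitStats in the CommitResponse
    }
    response = sessions.commit(session=session["name"], body=commit_body).execute()
    print(response.get("commitTimestamp"), response.get("commitStats"))
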
Method Details

}, ], "transaction": { # This message is used to select the transaction in which a Read or ExecuteSql call runs. See TransactionOptions for more information about transactions. # Required. The transaction to use. Must be a read-write transaction. To protect against replays, single-use transactions are not supported. The caller must either supply an existing transaction ID or begin a new transaction. - "begin": { # Transactions: Each session can have at most one active transaction at a time (note that standalone reads and queries use a transaction internally and do count towards the one transaction limit). After the active transaction is completed, the session can immediately be re-used for the next transaction. It is not necessary to create a new session for each transaction. Transaction modes: Cloud Spanner supports three transaction modes: 1. Locking read-write. This type of transaction is the only way to write data into Cloud Spanner. These transactions rely on pessimistic locking and, if necessary, two-phase commit. Locking read-write transactions may abort, requiring the application to retry. 2. Snapshot read-only. Snapshot read-only transactions provide guaranteed consistency across several reads, but do not allow writes. Snapshot read-only transactions can be configured to read at timestamps in the past, or configured to perform a strong read (where Spanner will select a timestamp such that the read is guaranteed to see the effects of all transactions that have committed before the start of the read). Snapshot read-only transactions do not need to be committed. Queries on change streams must be performed with the snapshot read-only transaction mode, specifying a strong read. Please see TransactionOptions.ReadOnly.strong for more details. 3. Partitioned DML. This type of transaction is used to execute a single Partitioned DML statement. Partitioned DML partitions the key space and runs the DML statement over each partition in parallel using separate, internal transactions that commit independently. Partitioned DML transactions do not need to be committed. For transactions that only read, snapshot read-only transactions provide simpler semantics and are almost always faster. In particular, read-only transactions do not take locks, so they do not conflict with read-write transactions. As a consequence of not taking locks, they also do not abort, so retry loops are not needed. Transactions may only read-write data in a single database. They may, however, read-write data in different tables within that database. Locking read-write transactions: Locking transactions may be used to atomically read-modify-write data anywhere in a database. This type of transaction is externally consistent. Clients should attempt to minimize the amount of time a transaction is active. Faster transactions commit with higher probability and cause less contention. Cloud Spanner attempts to keep read locks active as long as the transaction continues to do reads, and the transaction has not been terminated by Commit or Rollback. Long periods of inactivity at the client may cause Cloud Spanner to release a transaction's locks and abort it. Conceptually, a read-write transaction consists of zero or more reads or SQL statements followed by Commit. At any time before Commit, the client can send a Rollback request to abort the transaction. Semantics: Cloud Spanner can commit the transaction if all read locks it acquired are still valid at commit time, and it is able to acquire write locks for all writes. 
Cloud Spanner can abort the transaction for any reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees that the transaction has not modified any user data in Cloud Spanner. Unless the transaction commits, Cloud Spanner makes no guarantees about how long the transaction's locks were held for. It is an error to use Cloud Spanner locks for any sort of mutual exclusion other than between Cloud Spanner transactions themselves. Retrying aborted transactions: When a transaction aborts, the application can choose to retry the whole transaction again. To maximize the chances of successfully committing the retry, the client should execute the retry in the same session as the original attempt. The original session's lock priority increases with each consecutive abort, meaning that each attempt has a slightly better chance of success than the previous. Note that the lock priority is preserved per session (not per transaction). Lock priority is set by the first read or write in the first attempt of a read-write transaction. If the application starts a new session to retry the whole transaction, the transaction loses its original lock priority. Moreover, the lock priority is only preserved if the transaction fails with an `ABORTED` error. Under some circumstances (for example, many transactions attempting to modify the same row(s)), a transaction can abort many times in a short period before successfully committing. Thus, it is not a good idea to cap the number of retries a transaction can attempt; instead, it is better to limit the total amount of time spent retrying. Idle transactions: A transaction is considered idle if it has no outstanding reads or SQL queries and has not started a read or SQL query within the last 10 seconds. Idle transactions can be aborted by Cloud Spanner so that they don't hold on to locks indefinitely. If an idle transaction is aborted, the commit will fail with error `ABORTED`. If this behavior is undesirable, periodically executing a simple SQL query in the transaction (for example, `SELECT 1`) prevents the transaction from becoming idle. Snapshot read-only transactions: Snapshot read-only transactions provide a simpler method than locking read-write transactions for doing several consistent reads. However, this type of transaction does not support writes. Snapshot transactions do not take locks. Instead, they work by choosing a Cloud Spanner timestamp, then executing all reads at that timestamp. Since they do not acquire locks, they do not block concurrent read-write transactions. Unlike locking read-write transactions, snapshot read-only transactions never abort. They can fail if the chosen read timestamp is garbage collected; however, the default garbage collection policy is generous enough that most applications do not need to worry about this in practice. Snapshot read-only transactions do not need to call Commit or Rollback (and in fact are not permitted to do so). To execute a snapshot transaction, the client specifies a timestamp bound, which tells Cloud Spanner how to choose a read timestamp. The types of timestamp bound are: - Strong (the default). - Bounded staleness. - Exact staleness. If the Cloud Spanner database to be read is geographically distributed, stale read-only transactions can execute more quickly than strong or read-write transactions, because they are able to execute far from the leader replica. Each type of timestamp bound is discussed in detail below. 
Strong: Strong reads are guaranteed to see the effects of all transactions that have committed before the start of the read. Furthermore, all rows yielded by a single read are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. Strong reads are not repeatable: two consecutive strong read-only transactions might return inconsistent results if there are concurrent writes. If consistency across reads is required, the reads should be executed within a transaction or at an exact read timestamp. Queries on change streams (see below for more details) must also specify the strong read timestamp bound. See TransactionOptions.ReadOnly.strong. Exact staleness: These timestamp bounds execute reads at a user-specified timestamp. Reads at a timestamp are guaranteed to see a consistent prefix of the global transaction history: they observe modifications done by all transactions with a commit timestamp less than or equal to the read timestamp, and observe none of the modifications done by transactions with a larger commit timestamp. They will block until all conflicting transactions that may be assigned commit timestamps <= the read timestamp have finished. The timestamp can either be expressed as an absolute Cloud Spanner commit timestamp or a staleness relative to the current time. These modes do not require a "negotiation phase" to pick a timestamp. As a result, they execute slightly faster than the equivalent boundedly stale concurrency modes. On the other hand, boundedly stale reads usually return fresher results. See TransactionOptions.ReadOnly.read_timestamp and TransactionOptions.ReadOnly.exact_staleness. Bounded staleness: Bounded staleness modes allow Cloud Spanner to pick the read timestamp, subject to a user-provided staleness bound. Cloud Spanner chooses the newest timestamp within the staleness bound that allows execution of the reads at the closest available replica without blocking. All rows yielded are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. Boundedly stale reads are not repeatable: two stale reads, even if they use the same staleness bound, can execute at different timestamps and thus return inconsistent results. Boundedly stale reads execute in two phases: the first phase negotiates a timestamp among all replicas needed to serve the read. In the second phase, reads are executed at the negotiated timestamp. As a result of the two-phase execution, bounded staleness reads are usually a little slower than comparable exact staleness reads. However, they are typically able to return fresher results, and are more likely to execute at the closest replica. Because the timestamp negotiation requires up-front knowledge of which rows will be read, it can only be used with single-use read-only transactions. See TransactionOptions.ReadOnly.max_staleness and TransactionOptions.ReadOnly.min_read_timestamp. Old read timestamps and garbage collection: Cloud Spanner continuously garbage collects deleted and overwritten data in the background to reclaim storage space. This process is known as "version GC". By default, version GC reclaims versions after they are one hour old. Because of this, Cloud Spanner cannot perform reads at read timestamps more than one hour in the past. This restriction also applies to in-progress reads and/or SQL queries whose timestamps become too old while executing. 
Reads and SQL queries with too-old read timestamps fail with the error `FAILED_PRECONDITION`. You can configure and extend the `VERSION_RETENTION_PERIOD` of a database up to a period as long as one week, which allows Cloud Spanner to perform reads up to one week in the past. Querying Change Streams: A change stream is a schema object that can be configured to watch data changes on the entire database, a set of tables, or a set of columns in a database. When a change stream is created, Spanner automatically defines a corresponding SQL Table-Valued Function (TVF) that can be used to query the change records in the associated change stream using the ExecuteStreamingSql API. The name of the TVF for a change stream is generated from the name of the change stream: READ_. All queries on change stream TVFs must be executed using the ExecuteStreamingSql API with a single-use read-only transaction with a strong read-only timestamp_bound. The change stream TVF allows users to specify the start_timestamp and end_timestamp for the time range of interest. All change records within the retention period are accessible using the strong read-only timestamp_bound. All other TransactionOptions are invalid for change stream queries. In addition, if TransactionOptions.read_only.return_read_timestamp is set to true, a special value of 2^63 - 2 will be returned in the Transaction message that describes the transaction, instead of a valid read timestamp. This special value should be discarded and not used for any subsequent queries. Please see https://cloud.google.com/spanner/docs/change-streams for more details on how to query the change stream TVFs. Partitioned DML transactions: Partitioned DML transactions are used to execute DML statements with a different execution strategy that provides different, and often better, scalability properties for large, table-wide operations than DML in a ReadWrite transaction. Smaller scoped statements, such as an OLTP workload, should prefer using ReadWrite transactions. Partitioned DML partitions the keyspace and runs the DML statement on each partition in separate, internal transactions. These transactions commit automatically when complete, and run independently from one another. To reduce lock contention, this execution strategy only acquires read locks on rows that match the WHERE clause of the statement. Additionally, the smaller per-partition transactions hold locks for less time. That said, Partitioned DML is not a drop-in replacement for standard DML used in ReadWrite transactions. - The DML statement must be fully-partitionable. Specifically, the statement must be expressible as the union of many statements which each access only a single row of the table. - The statement is not applied atomically to all rows of the table. Rather, the statement is applied atomically to partitions of the table, in independent transactions. Secondary index rows are updated atomically with the base table rows. - Partitioned DML does not guarantee exactly-once execution semantics against a partition. The statement is applied at least once to each partition. It is strongly recommended that the DML statement be idempotent to avoid unexpected results. For instance, it is potentially dangerous to run a statement such as `UPDATE table SET column = column + 1` as it could be run multiple times against some rows. - The partitions are committed automatically - there is no support for Commit or Rollback. 
If the call returns an error, or if the client issuing the ExecuteSql call dies, it is possible that some rows had the statement executed on them successfully. It is also possible that the statement was never executed against other rows. - Partitioned DML transactions may only contain the execution of a single DML statement via ExecuteSql or ExecuteStreamingSql. - If any error is encountered during the execution of the partitioned DML operation (for instance, a UNIQUE INDEX violation, division by zero, or a value that cannot be stored due to schema constraints), then the operation is stopped at that point and an error is returned. It is possible that at this point, some partitions have been committed (or even committed multiple times), and other partitions have not been run at all. Given the above, Partitioned DML is a good fit for large, database-wide operations that are idempotent, such as deleting old rows from a very large table. # Begin a new transaction and execute this read or SQL query in it. The transaction ID of the new transaction is returned in ResultSetMetadata.transaction, which is a Transaction. + "begin": { # Transactions: Each session can have at most one active transaction at a time (note that standalone reads and queries use a transaction internally and do count towards the one transaction limit). After the active transaction is completed, the session can immediately be re-used for the next transaction. It is not necessary to create a new session for each transaction. Transaction modes: Cloud Spanner supports three transaction modes: 1. Locking read-write. This type of transaction is the only way to write data into Cloud Spanner. These transactions rely on pessimistic locking and, if necessary, two-phase commit. Locking read-write transactions may abort, requiring the application to retry. 2. Snapshot read-only. Snapshot read-only transactions provide guaranteed consistency across several reads, but do not allow writes. Snapshot read-only transactions can be configured to read at timestamps in the past, or configured to perform a strong read (where Spanner will select a timestamp such that the read is guaranteed to see the effects of all transactions that have committed before the start of the read). Snapshot read-only transactions do not need to be committed. Queries on change streams must be performed with the snapshot read-only transaction mode, specifying a strong read. See TransactionOptions.ReadOnly.strong for more details. 3. Partitioned DML. This type of transaction is used to execute a single Partitioned DML statement. Partitioned DML partitions the key space and runs the DML statement over each partition in parallel using separate, internal transactions that commit independently. Partitioned DML transactions do not need to be committed. For transactions that only read, snapshot read-only transactions provide simpler semantics and are almost always faster. In particular, read-only transactions do not take locks, so they do not conflict with read-write transactions. As a consequence of not taking locks, they also do not abort, so retry loops are not needed. Transactions may only read-write data in a single database. They may, however, read-write data in different tables within that database. Locking read-write transactions: Locking transactions may be used to atomically read-modify-write data anywhere in a database. This type of transaction is externally consistent. Clients should attempt to minimize the amount of time a transaction is active. 
Faster transactions commit with higher probability and cause less contention. Cloud Spanner attempts to keep read locks active as long as the transaction continues to do reads, and the transaction has not been terminated by Commit or Rollback. Long periods of inactivity at the client may cause Cloud Spanner to release a transaction's locks and abort it. Conceptually, a read-write transaction consists of zero or more reads or SQL statements followed by Commit. At any time before Commit, the client can send a Rollback request to abort the transaction. Semantics: Cloud Spanner can commit the transaction if all read locks it acquired are still valid at commit time, and it is able to acquire write locks for all writes. Cloud Spanner can abort the transaction for any reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees that the transaction has not modified any user data in Cloud Spanner. Unless the transaction commits, Cloud Spanner makes no guarantees about how long the transaction's locks were held for. It is an error to use Cloud Spanner locks for any sort of mutual exclusion other than between Cloud Spanner transactions themselves. Retrying aborted transactions: When a transaction aborts, the application can choose to retry the whole transaction again. To maximize the chances of successfully committing the retry, the client should execute the retry in the same session as the original attempt. The original session's lock priority increases with each consecutive abort, meaning that each attempt has a slightly better chance of success than the previous. Note that the lock priority is preserved per session (not per transaction). Lock priority is set by the first read or write in the first attempt of a read-write transaction. If the application starts a new session to retry the whole transaction, the transaction loses its original lock priority. Moreover, the lock priority is only preserved if the transaction fails with an `ABORTED` error. Under some circumstances (for example, many transactions attempting to modify the same row(s)), a transaction can abort many times in a short period before successfully committing. Thus, it is not a good idea to cap the number of retries a transaction can attempt; instead, it is better to limit the total amount of time spent retrying. Idle transactions: A transaction is considered idle if it has no outstanding reads or SQL queries and has not started a read or SQL query within the last 10 seconds. Idle transactions can be aborted by Cloud Spanner so that they don't hold on to locks indefinitely. If an idle transaction is aborted, the commit will fail with error `ABORTED`. If this behavior is undesirable, periodically executing a simple SQL query in the transaction (for example, `SELECT 1`) prevents the transaction from becoming idle. Snapshot read-only transactions: Snapshot read-only transactions provide a simpler method than locking read-write transactions for doing several consistent reads. However, this type of transaction does not support writes. Snapshot transactions do not take locks. Instead, they work by choosing a Cloud Spanner timestamp, then executing all reads at that timestamp. Since they do not acquire locks, they do not block concurrent read-write transactions. Unlike locking read-write transactions, snapshot read-only transactions never abort. 
They can fail if the chosen read timestamp is garbage collected; however, the default garbage collection policy is generous enough that most applications do not need to worry about this in practice. Snapshot read-only transactions do not need to call Commit or Rollback (and in fact are not permitted to do so). To execute a snapshot transaction, the client specifies a timestamp bound, which tells Cloud Spanner how to choose a read timestamp. The types of timestamp bound are: - Strong (the default). - Bounded staleness. - Exact staleness. If the Cloud Spanner database to be read is geographically distributed, stale read-only transactions can execute more quickly than strong or read-write transactions, because they are able to execute far from the leader replica. Each type of timestamp bound is discussed in detail below. Strong: Strong reads are guaranteed to see the effects of all transactions that have committed before the start of the read. Furthermore, all rows yielded by a single read are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. Strong reads are not repeatable: two consecutive strong read-only transactions might return inconsistent results if there are concurrent writes. If consistency across reads is required, the reads should be executed within a transaction or at an exact read timestamp. Queries on change streams (see below for more details) must also specify the strong read timestamp bound. See TransactionOptions.ReadOnly.strong. Exact staleness: These timestamp bounds execute reads at a user-specified timestamp. Reads at a timestamp are guaranteed to see a consistent prefix of the global transaction history: they observe modifications done by all transactions with a commit timestamp less than or equal to the read timestamp, and observe none of the modifications done by transactions with a larger commit timestamp. They will block until all conflicting transactions that may be assigned commit timestamps <= the read timestamp have finished. The timestamp can either be expressed as an absolute Cloud Spanner commit timestamp or a staleness relative to the current time. These modes do not require a "negotiation phase" to pick a timestamp. As a result, they execute slightly faster than the equivalent boundedly stale concurrency modes. On the other hand, boundedly stale reads usually return fresher results. See TransactionOptions.ReadOnly.read_timestamp and TransactionOptions.ReadOnly.exact_staleness. Bounded staleness: Bounded staleness modes allow Cloud Spanner to pick the read timestamp, subject to a user-provided staleness bound. Cloud Spanner chooses the newest timestamp within the staleness bound that allows execution of the reads at the closest available replica without blocking. All rows yielded are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. Boundedly stale reads are not repeatable: two stale reads, even if they use the same staleness bound, can execute at different timestamps and thus return inconsistent results. Boundedly stale reads execute in two phases: the first phase negotiates a timestamp among all replicas needed to serve the read. In the second phase, reads are executed at the negotiated timestamp. As a result of the two-phase execution, bounded staleness reads are usually a little slower than comparable exact staleness reads. 
However, they are typically able to return fresher results, and are more likely to execute at the closest replica. Because the timestamp negotiation requires up-front knowledge of which rows will be read, it can only be used with single-use read-only transactions. See TransactionOptions.ReadOnly.max_staleness and TransactionOptions.ReadOnly.min_read_timestamp. Old read timestamps and garbage collection: Cloud Spanner continuously garbage collects deleted and overwritten data in the background to reclaim storage space. This process is known as "version GC". By default, version GC reclaims versions after they are one hour old. Because of this, Cloud Spanner cannot perform reads at read timestamps more than one hour in the past. This restriction also applies to in-progress reads and/or SQL queries whose timestamps become too old while executing. Reads and SQL queries with too-old read timestamps fail with the error `FAILED_PRECONDITION`. You can configure and extend the `VERSION_RETENTION_PERIOD` of a database up to a period as long as one week, which allows Cloud Spanner to perform reads up to one week in the past. Querying change streams: A change stream is a schema object that can be configured to watch data changes on the entire database, a set of tables, or a set of columns in a database. When a change stream is created, Spanner automatically defines a corresponding SQL Table-Valued Function (TVF) that can be used to query the change records in the associated change stream using the ExecuteStreamingSql API. The name of the TVF for a change stream is generated from the name of the change stream: `READ_` followed by the change stream name. All queries on change stream TVFs must be executed using the ExecuteStreamingSql API with a single-use read-only transaction with a strong read-only timestamp_bound. The change stream TVF allows users to specify the start_timestamp and end_timestamp for the time range of interest. All change records within the retention period are accessible using the strong read-only timestamp_bound. All other TransactionOptions are invalid for change stream queries. In addition, if TransactionOptions.read_only.return_read_timestamp is set to true, a special value of 2^63 - 2 will be returned in the Transaction message that describes the transaction, instead of a valid read timestamp. This special value should be discarded and not used for any subsequent queries. Please see https://cloud.google.com/spanner/docs/change-streams for more details on how to query the change stream TVFs. Partitioned DML transactions: Partitioned DML transactions are used to execute DML statements with a different execution strategy that provides different, and often better, scalability properties for large, table-wide operations than DML in a ReadWrite transaction. Smaller scoped statements, such as those in an OLTP workload, should use ReadWrite transactions. Partitioned DML partitions the keyspace and runs the DML statement on each partition in separate, internal transactions. These transactions commit automatically when complete, and run independently from one another. To reduce lock contention, this execution strategy only acquires read locks on rows that match the WHERE clause of the statement. Additionally, the smaller per-partition transactions hold locks for less time. That said, Partitioned DML is not a drop-in replacement for standard DML used in ReadWrite transactions. - The DML statement must be fully-partitionable. 
Specifically, the statement must be expressible as the union of many statements which each access only a single row of the table. - The statement is not applied atomically to all rows of the table. Rather, the statement is applied atomically to partitions of the table, in independent transactions. Secondary index rows are updated atomically with the base table rows. - Partitioned DML does not guarantee exactly-once execution semantics against a partition. The statement is applied at least once to each partition. It is strongly recommended that the DML statement be idempotent to avoid unexpected results. For instance, it is potentially dangerous to run a statement such as `UPDATE table SET column = column + 1` as it could be run multiple times against some rows. - The partitions are committed automatically - there is no support for Commit or Rollback. If the call returns an error, or if the client issuing the ExecuteSql call dies, it is possible that some rows had the statement executed on them successfully. It is also possible that the statement was never executed against other rows. - Partitioned DML transactions may only contain the execution of a single DML statement via ExecuteSql or ExecuteStreamingSql. - If any error is encountered during the execution of the partitioned DML operation (for instance, a UNIQUE INDEX violation, division by zero, or a value that cannot be stored due to schema constraints), then the operation is stopped at that point and an error is returned. It is possible that at this point, some partitions have been committed (or even committed multiple times), and other partitions have not been run at all. Given the above, Partitioned DML is a good fit for large, database-wide operations that are idempotent, such as deleting old rows from a very large table. # Begin a new transaction and execute this read or SQL query in it. The transaction ID of the new transaction is returned in ResultSetMetadata.transaction, which is a Transaction. "excludeTxnFromChangeStreams": True or False, # When `exclude_txn_from_change_streams` is set to `true`: * Modifications from this transaction will not be recorded in change streams with DDL option `allow_txn_exclusion=true` that are tracking columns modified by these transactions. * Modifications from this transaction will be recorded in change streams with DDL option `allow_txn_exclusion=false or not set` that are tracking columns modified by these transactions. When `exclude_txn_from_change_streams` is set to `false` or not set, modifications from this transaction will be recorded in all change streams that are tracking columns modified by these transactions. `exclude_txn_from_change_streams` may only be specified for read-write or partitioned-dml transactions, otherwise the API will return an `INVALID_ARGUMENT` error. "partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction. Authorization to begin a Partitioned DML transaction requires `spanner.databases.beginPartitionedDmlTransaction` permission on the `session` resource. }, @@ -595,7 +595,7 @@
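A sketch of the Partitioned DML flow against the sessions surface. The session, table, and predicate are hypothetical; the statement is deliberately idempotent since it may be applied more than once per partition:

    from googleapiclient.discovery import build

    service = build('spanner', 'v1')
    sessions = service.projects().instances().databases().sessions()
    session = 'projects/p/instances/i/databases/d/sessions/s'  # hypothetical

    # Partitioned DML is its own transaction mode; there is no Commit/Rollback.
    txn = sessions.beginTransaction(
        session=session, body={'options': {'partitionedDml': {}}}).execute()

    # Idempotent, fully-partitionable statement: safe to apply more than once
    # to any given row. `UPDATE ... SET col = col + 1` would not be.
    sessions.executeSql(
        session=session,
        body={
            'sql': "DELETE FROM Events WHERE CreateTime < '2023-01-01'",
            'transaction': {'id': txn['id']},
            'seqno': '1',  # per-transaction sequence number, required for DML
        },
    ).execute()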

Method Details

}, }, "id": "A String", # Execute the read or SQL query in a previously-started transaction. - "singleUse": { # Transactions: Each session can have at most one active transaction at a time (note that standalone reads and queries use a transaction internally and do count towards the one transaction limit). After the active transaction is completed, the session can immediately be re-used for the next transaction. It is not necessary to create a new session for each transaction. Transaction modes: Cloud Spanner supports three transaction modes: 1. Locking read-write. This type of transaction is the only way to write data into Cloud Spanner. These transactions rely on pessimistic locking and, if necessary, two-phase commit. Locking read-write transactions may abort, requiring the application to retry. 2. Snapshot read-only. Snapshot read-only transactions provide guaranteed consistency across several reads, but do not allow writes. Snapshot read-only transactions can be configured to read at timestamps in the past, or configured to perform a strong read (where Spanner will select a timestamp such that the read is guaranteed to see the effects of all transactions that have committed before the start of the read). Snapshot read-only transactions do not need to be committed. Queries on change streams must be performed with the snapshot read-only transaction mode, specifying a strong read. Please see TransactionOptions.ReadOnly.strong for more details. 3. Partitioned DML. This type of transaction is used to execute a single Partitioned DML statement. Partitioned DML partitions the key space and runs the DML statement over each partition in parallel using separate, internal transactions that commit independently. Partitioned DML transactions do not need to be committed. For transactions that only read, snapshot read-only transactions provide simpler semantics and are almost always faster. In particular, read-only transactions do not take locks, so they do not conflict with read-write transactions. As a consequence of not taking locks, they also do not abort, so retry loops are not needed. Transactions may only read-write data in a single database. They may, however, read-write data in different tables within that database. Locking read-write transactions: Locking transactions may be used to atomically read-modify-write data anywhere in a database. This type of transaction is externally consistent. Clients should attempt to minimize the amount of time a transaction is active. Faster transactions commit with higher probability and cause less contention. Cloud Spanner attempts to keep read locks active as long as the transaction continues to do reads, and the transaction has not been terminated by Commit or Rollback. Long periods of inactivity at the client may cause Cloud Spanner to release a transaction's locks and abort it. Conceptually, a read-write transaction consists of zero or more reads or SQL statements followed by Commit. At any time before Commit, the client can send a Rollback request to abort the transaction. Semantics: Cloud Spanner can commit the transaction if all read locks it acquired are still valid at commit time, and it is able to acquire write locks for all writes. Cloud Spanner can abort the transaction for any reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees that the transaction has not modified any user data in Cloud Spanner. Unless the transaction commits, Cloud Spanner makes no guarantees about how long the transaction's locks were held for. 
It is an error to use Cloud Spanner locks for any sort of mutual exclusion other than between Cloud Spanner transactions themselves. Retrying aborted transactions: When a transaction aborts, the application can choose to retry the whole transaction again. To maximize the chances of successfully committing the retry, the client should execute the retry in the same session as the original attempt. The original session's lock priority increases with each consecutive abort, meaning that each attempt has a slightly better chance of success than the previous. Note that the lock priority is preserved per session (not per transaction). Lock priority is set by the first read or write in the first attempt of a read-write transaction. If the application starts a new session to retry the whole transaction, the transaction loses its original lock priority. Moreover, the lock priority is only preserved if the transaction fails with an `ABORTED` error. Under some circumstances (for example, many transactions attempting to modify the same row(s)), a transaction can abort many times in a short period before successfully committing. Thus, it is not a good idea to cap the number of retries a transaction can attempt; instead, it is better to limit the total amount of time spent retrying. Idle transactions: A transaction is considered idle if it has no outstanding reads or SQL queries and has not started a read or SQL query within the last 10 seconds. Idle transactions can be aborted by Cloud Spanner so that they don't hold on to locks indefinitely. If an idle transaction is aborted, the commit will fail with error `ABORTED`. If this behavior is undesirable, periodically executing a simple SQL query in the transaction (for example, `SELECT 1`) prevents the transaction from becoming idle. Snapshot read-only transactions: Snapshot read-only transactions provide a simpler method than locking read-write transactions for doing several consistent reads. However, this type of transaction does not support writes. Snapshot transactions do not take locks. Instead, they work by choosing a Cloud Spanner timestamp, then executing all reads at that timestamp. Since they do not acquire locks, they do not block concurrent read-write transactions. Unlike locking read-write transactions, snapshot read-only transactions never abort. They can fail if the chosen read timestamp is garbage collected; however, the default garbage collection policy is generous enough that most applications do not need to worry about this in practice. Snapshot read-only transactions do not need to call Commit or Rollback (and in fact are not permitted to do so). To execute a snapshot transaction, the client specifies a timestamp bound, which tells Cloud Spanner how to choose a read timestamp. The types of timestamp bound are: - Strong (the default). - Bounded staleness. - Exact staleness. If the Cloud Spanner database to be read is geographically distributed, stale read-only transactions can execute more quickly than strong or read-write transactions, because they are able to execute far from the leader replica. Each type of timestamp bound is discussed in detail below. Strong: Strong reads are guaranteed to see the effects of all transactions that have committed before the start of the read. Furthermore, all rows yielded by a single read are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. 
Strong reads are not repeatable: two consecutive strong read-only transactions might return inconsistent results if there are concurrent writes. If consistency across reads is required, the reads should be executed within a transaction or at an exact read timestamp. Queries on change streams (see below for more details) must also specify the strong read timestamp bound. See TransactionOptions.ReadOnly.strong. Exact staleness: These timestamp bounds execute reads at a user-specified timestamp. Reads at a timestamp are guaranteed to see a consistent prefix of the global transaction history: they observe modifications done by all transactions with a commit timestamp less than or equal to the read timestamp, and observe none of the modifications done by transactions with a larger commit timestamp. They will block until all conflicting transactions that may be assigned commit timestamps <= the read timestamp have finished. The timestamp can either be expressed as an absolute Cloud Spanner commit timestamp or a staleness relative to the current time. These modes do not require a "negotiation phase" to pick a timestamp. As a result, they execute slightly faster than the equivalent boundedly stale concurrency modes. On the other hand, boundedly stale reads usually return fresher results. See TransactionOptions.ReadOnly.read_timestamp and TransactionOptions.ReadOnly.exact_staleness. Bounded staleness: Bounded staleness modes allow Cloud Spanner to pick the read timestamp, subject to a user-provided staleness bound. Cloud Spanner chooses the newest timestamp within the staleness bound that allows execution of the reads at the closest available replica without blocking. All rows yielded are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. Boundedly stale reads are not repeatable: two stale reads, even if they use the same staleness bound, can execute at different timestamps and thus return inconsistent results. Boundedly stale reads execute in two phases: the first phase negotiates a timestamp among all replicas needed to serve the read. In the second phase, reads are executed at the negotiated timestamp. As a result of the two-phase execution, bounded staleness reads are usually a little slower than comparable exact staleness reads. However, they are typically able to return fresher results, and are more likely to execute at the closest replica. Because the timestamp negotiation requires up-front knowledge of which rows will be read, it can only be used with single-use read-only transactions. See TransactionOptions.ReadOnly.max_staleness and TransactionOptions.ReadOnly.min_read_timestamp. Old read timestamps and garbage collection: Cloud Spanner continuously garbage collects deleted and overwritten data in the background to reclaim storage space. This process is known as "version GC". By default, version GC reclaims versions after they are one hour old. Because of this, Cloud Spanner cannot perform reads at read timestamps more than one hour in the past. This restriction also applies to in-progress reads and/or SQL queries whose timestamps become too old while executing. Reads and SQL queries with too-old read timestamps fail with the error `FAILED_PRECONDITION`. You can configure and extend the `VERSION_RETENTION_PERIOD` of a database up to a period as long as one week, which allows Cloud Spanner to perform reads up to one week in the past.
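Extending the retention window is a DDL operation. A sketch using the databases.updateDdl method; the database name and the '7d' period are illustrative, and updateDdl returns a long-running operation to poll:

    from googleapiclient.discovery import build

    service = build('spanner', 'v1')
    databases = service.projects().instances().databases()

    # Allow stale reads up to one week in the past (the default is one hour).
    op = databases.updateDdl(
        database='projects/p/instances/i/databases/d',  # hypothetical
        body={'statements': [
            "ALTER DATABASE d SET OPTIONS (version_retention_period = '7d')",
        ]},
    ).execute()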
Querying change streams: A change stream is a schema object that can be configured to watch data changes on the entire database, a set of tables, or a set of columns in a database. When a change stream is created, Spanner automatically defines a corresponding SQL Table-Valued Function (TVF) that can be used to query the change records in the associated change stream using the ExecuteStreamingSql API. The name of the TVF for a change stream is generated from the name of the change stream: `READ_` followed by the change stream name. All queries on change stream TVFs must be executed using the ExecuteStreamingSql API with a single-use read-only transaction with a strong read-only timestamp_bound. The change stream TVF allows users to specify the start_timestamp and end_timestamp for the time range of interest. All change records within the retention period are accessible using the strong read-only timestamp_bound. All other TransactionOptions are invalid for change stream queries. In addition, if TransactionOptions.read_only.return_read_timestamp is set to true, a special value of 2^63 - 2 will be returned in the Transaction message that describes the transaction, instead of a valid read timestamp. This special value should be discarded and not used for any subsequent queries. Please see https://cloud.google.com/spanner/docs/change-streams for more details on how to query the change stream TVFs. Partitioned DML transactions: Partitioned DML transactions are used to execute DML statements with a different execution strategy that provides different, and often better, scalability properties for large, table-wide operations than DML in a ReadWrite transaction. Smaller scoped statements, such as those in an OLTP workload, should use ReadWrite transactions. Partitioned DML partitions the keyspace and runs the DML statement on each partition in separate, internal transactions. These transactions commit automatically when complete, and run independently from one another. To reduce lock contention, this execution strategy only acquires read locks on rows that match the WHERE clause of the statement. Additionally, the smaller per-partition transactions hold locks for less time. That said, Partitioned DML is not a drop-in replacement for standard DML used in ReadWrite transactions. - The DML statement must be fully-partitionable. Specifically, the statement must be expressible as the union of many statements which each access only a single row of the table. - The statement is not applied atomically to all rows of the table. Rather, the statement is applied atomically to partitions of the table, in independent transactions. Secondary index rows are updated atomically with the base table rows. - Partitioned DML does not guarantee exactly-once execution semantics against a partition. The statement is applied at least once to each partition. It is strongly recommended that the DML statement be idempotent to avoid unexpected results. For instance, it is potentially dangerous to run a statement such as `UPDATE table SET column = column + 1` as it could be run multiple times against some rows. - The partitions are committed automatically - there is no support for Commit or Rollback. If the call returns an error, or if the client issuing the ExecuteSql call dies, it is possible that some rows had the statement executed on them successfully. It is also possible that the statement was never executed against other rows. - Partitioned DML transactions may only contain the execution of a single DML statement via ExecuteSql or ExecuteStreamingSql. 
- If any error is encountered during the execution of the partitioned DML operation (for instance, a UNIQUE INDEX violation, division by zero, or a value that cannot be stored due to schema constraints), then the operation is stopped at that point and an error is returned. It is possible that at this point, some partitions have been committed (or even committed multiple times), and other partitions have not been run at all. Given the above, Partitioned DML is a good fit for large, database-wide operations that are idempotent, such as deleting old rows from a very large table. # Execute the read or SQL query in a temporary transaction. This is the most efficient way to execute a transaction that consists of a single SQL query. + "singleUse": { # Transactions: Each session can have at most one active transaction at a time (note that standalone reads and queries use a transaction internally and do count towards the one transaction limit). After the active transaction is completed, the session can immediately be re-used for the next transaction. It is not necessary to create a new session for each transaction. Transaction modes: Cloud Spanner supports three transaction modes: 1. Locking read-write. This type of transaction is the only way to write data into Cloud Spanner. These transactions rely on pessimistic locking and, if necessary, two-phase commit. Locking read-write transactions may abort, requiring the application to retry. 2. Snapshot read-only. Snapshot read-only transactions provide guaranteed consistency across several reads, but do not allow writes. Snapshot read-only transactions can be configured to read at timestamps in the past, or configured to perform a strong read (where Spanner will select a timestamp such that the read is guaranteed to see the effects of all transactions that have committed before the start of the read). Snapshot read-only transactions do not need to be committed. Queries on change streams must be performed with the snapshot read-only transaction mode, specifying a strong read. See TransactionOptions.ReadOnly.strong for more details. 3. Partitioned DML. This type of transaction is used to execute a single Partitioned DML statement. Partitioned DML partitions the key space and runs the DML statement over each partition in parallel using separate, internal transactions that commit independently. Partitioned DML transactions do not need to be committed. For transactions that only read, snapshot read-only transactions provide simpler semantics and are almost always faster. In particular, read-only transactions do not take locks, so they do not conflict with read-write transactions. As a consequence of not taking locks, they also do not abort, so retry loops are not needed. Transactions may only read-write data in a single database. They may, however, read-write data in different tables within that database. Locking read-write transactions: Locking transactions may be used to atomically read-modify-write data anywhere in a database. This type of transaction is externally consistent. Clients should attempt to minimize the amount of time a transaction is active. Faster transactions commit with higher probability and cause less contention. Cloud Spanner attempts to keep read locks active as long as the transaction continues to do reads, and the transaction has not been terminated by Commit or Rollback. Long periods of inactivity at the client may cause Cloud Spanner to release a transaction's locks and abort it. 
Conceptually, a read-write transaction consists of zero or more reads or SQL statements followed by Commit. At any time before Commit, the client can send a Rollback request to abort the transaction. Semantics: Cloud Spanner can commit the transaction if all read locks it acquired are still valid at commit time, and it is able to acquire write locks for all writes. Cloud Spanner can abort the transaction for any reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees that the transaction has not modified any user data in Cloud Spanner. Unless the transaction commits, Cloud Spanner makes no guarantees about how long the transaction's locks were held for. It is an error to use Cloud Spanner locks for any sort of mutual exclusion other than between Cloud Spanner transactions themselves. Retrying aborted transactions: When a transaction aborts, the application can choose to retry the whole transaction again. To maximize the chances of successfully committing the retry, the client should execute the retry in the same session as the original attempt. The original session's lock priority increases with each consecutive abort, meaning that each attempt has a slightly better chance of success than the previous. Note that the lock priority is preserved per session (not per transaction). Lock priority is set by the first read or write in the first attempt of a read-write transaction. If the application starts a new session to retry the whole transaction, the transaction loses its original lock priority. Moreover, the lock priority is only preserved if the transaction fails with an `ABORTED` error. Under some circumstances (for example, many transactions attempting to modify the same row(s)), a transaction can abort many times in a short period before successfully committing. Thus, it is not a good idea to cap the number of retries a transaction can attempt; instead, it is better to limit the total amount of time spent retrying. Idle transactions: A transaction is considered idle if it has no outstanding reads or SQL queries and has not started a read or SQL query within the last 10 seconds. Idle transactions can be aborted by Cloud Spanner so that they don't hold on to locks indefinitely. If an idle transaction is aborted, the commit will fail with error `ABORTED`. If this behavior is undesirable, periodically executing a simple SQL query in the transaction (for example, `SELECT 1`) prevents the transaction from becoming idle. Snapshot read-only transactions: Snapshot read-only transactions provide a simpler method than locking read-write transactions for doing several consistent reads. However, this type of transaction does not support writes. Snapshot transactions do not take locks. Instead, they work by choosing a Cloud Spanner timestamp, then executing all reads at that timestamp. Since they do not acquire locks, they do not block concurrent read-write transactions. Unlike locking read-write transactions, snapshot read-only transactions never abort. They can fail if the chosen read timestamp is garbage collected; however, the default garbage collection policy is generous enough that most applications do not need to worry about this in practice. Snapshot read-only transactions do not need to call Commit or Rollback (and in fact are not permitted to do so). To execute a snapshot transaction, the client specifies a timestamp bound, which tells Cloud Spanner how to choose a read timestamp. The types of timestamp bound are: - Strong (the default). - Bounded staleness. 
- Exact staleness. If the Cloud Spanner database to be read is geographically distributed, stale read-only transactions can execute more quickly than strong or read-write transactions, because they are able to execute far from the leader replica. Each type of timestamp bound is discussed in detail below. Strong: Strong reads are guaranteed to see the effects of all transactions that have committed before the start of the read. Furthermore, all rows yielded by a single read are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. Strong reads are not repeatable: two consecutive strong read-only transactions might return inconsistent results if there are concurrent writes. If consistency across reads is required, the reads should be executed within a transaction or at an exact read timestamp. Queries on change streams (see below for more details) must also specify the strong read timestamp bound. See TransactionOptions.ReadOnly.strong. Exact staleness: These timestamp bounds execute reads at a user-specified timestamp. Reads at a timestamp are guaranteed to see a consistent prefix of the global transaction history: they observe modifications done by all transactions with a commit timestamp less than or equal to the read timestamp, and observe none of the modifications done by transactions with a larger commit timestamp. They will block until all conflicting transactions that may be assigned commit timestamps <= the read timestamp have finished. The timestamp can either be expressed as an absolute Cloud Spanner commit timestamp or a staleness relative to the current time. These modes do not require a "negotiation phase" to pick a timestamp. As a result, they execute slightly faster than the equivalent boundedly stale concurrency modes. On the other hand, boundedly stale reads usually return fresher results. See TransactionOptions.ReadOnly.read_timestamp and TransactionOptions.ReadOnly.exact_staleness. Bounded staleness: Bounded staleness modes allow Cloud Spanner to pick the read timestamp, subject to a user-provided staleness bound. Cloud Spanner chooses the newest timestamp within the staleness bound that allows execution of the reads at the closest available replica without blocking. All rows yielded are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. Boundedly stale reads are not repeatable: two stale reads, even if they use the same staleness bound, can execute at different timestamps and thus return inconsistent results. Boundedly stale reads execute in two phases: the first phase negotiates a timestamp among all replicas needed to serve the read. In the second phase, reads are executed at the negotiated timestamp. As a result of the two-phase execution, bounded staleness reads are usually a little slower than comparable exact staleness reads. However, they are typically able to return fresher results, and are more likely to execute at the closest replica. Because the timestamp negotiation requires up-front knowledge of which rows will be read, it can only be used with single-use read-only transactions. See TransactionOptions.ReadOnly.max_staleness and TransactionOptions.ReadOnly.min_read_timestamp. Old read timestamps and garbage collection: Cloud Spanner continuously garbage collects deleted and overwritten data in the background to reclaim storage space. This process is known as "version GC". 
By default, version GC reclaims versions after they are one hour old. Because of this, Cloud Spanner cannot perform reads at read timestamps more than one hour in the past. This restriction also applies to in-progress reads and/or SQL queries whose timestamps become too old while executing. Reads and SQL queries with too-old read timestamps fail with the error `FAILED_PRECONDITION`. You can configure and extend the `VERSION_RETENTION_PERIOD` of a database up to a period as long as one week, which allows Cloud Spanner to perform reads up to one week in the past. Querying change streams: A change stream is a schema object that can be configured to watch data changes on the entire database, a set of tables, or a set of columns in a database. When a change stream is created, Spanner automatically defines a corresponding SQL Table-Valued Function (TVF) that can be used to query the change records in the associated change stream using the ExecuteStreamingSql API. The name of the TVF for a change stream is generated from the name of the change stream: `READ_` followed by the change stream name. All queries on change stream TVFs must be executed using the ExecuteStreamingSql API with a single-use read-only transaction with a strong read-only timestamp_bound. The change stream TVF allows users to specify the start_timestamp and end_timestamp for the time range of interest. All change records within the retention period are accessible using the strong read-only timestamp_bound. All other TransactionOptions are invalid for change stream queries. In addition, if TransactionOptions.read_only.return_read_timestamp is set to true, a special value of 2^63 - 2 will be returned in the Transaction message that describes the transaction, instead of a valid read timestamp. This special value should be discarded and not used for any subsequent queries. Please see https://cloud.google.com/spanner/docs/change-streams for more details on how to query the change stream TVFs. Partitioned DML transactions: Partitioned DML transactions are used to execute DML statements with a different execution strategy that provides different, and often better, scalability properties for large, table-wide operations than DML in a ReadWrite transaction. Smaller scoped statements, such as those in an OLTP workload, should use ReadWrite transactions. Partitioned DML partitions the keyspace and runs the DML statement on each partition in separate, internal transactions. These transactions commit automatically when complete, and run independently from one another. To reduce lock contention, this execution strategy only acquires read locks on rows that match the WHERE clause of the statement. Additionally, the smaller per-partition transactions hold locks for less time. That said, Partitioned DML is not a drop-in replacement for standard DML used in ReadWrite transactions. - The DML statement must be fully-partitionable. Specifically, the statement must be expressible as the union of many statements which each access only a single row of the table. - The statement is not applied atomically to all rows of the table. Rather, the statement is applied atomically to partitions of the table, in independent transactions. Secondary index rows are updated atomically with the base table rows. - Partitioned DML does not guarantee exactly-once execution semantics against a partition. The statement is applied at least once to each partition. It is strongly recommended that the DML statement be idempotent to avoid unexpected results. 
For instance, it is potentially dangerous to run a statement such as `UPDATE table SET column = column + 1` as it could be run multiple times against some rows. - The partitions are committed automatically - there is no support for Commit or Rollback. If the call returns an error, or if the client issuing the ExecuteSql call dies, it is possible that some rows had the statement executed on them successfully. It is also possible that the statement was never executed against other rows. - Partitioned DML transactions may only contain the execution of a single DML statement via ExecuteSql or ExecuteStreamingSql. - If any error is encountered during the execution of the partitioned DML operation (for instance, a UNIQUE INDEX violation, division by zero, or a value that cannot be stored due to schema constraints), then the operation is stopped at that point and an error is returned. It is possible that at this point, some partitions have been committed (or even committed multiple times), and other partitions have not been run at all. Given the above, Partitioned DML is a good fit for large, database-wide operations that are idempotent, such as deleting old rows from a very large table. # Execute the read or SQL query in a temporary transaction. This is the most efficient way to execute a transaction that consists of a single SQL query. "excludeTxnFromChangeStreams": True or False, # When `exclude_txn_from_change_streams` is set to `true`: * Modifications from this transaction will not be recorded in change streams with DDL option `allow_txn_exclusion=true` that are tracking columns modified by these transactions. * Modifications from this transaction will be recorded in change streams with DDL option `allow_txn_exclusion=false or not set` that are tracking columns modified by these transactions. When `exclude_txn_from_change_streams` is set to `false` or not set, modifications from this transaction will be recorded in all change streams that are tracking columns modified by these transactions. `exclude_txn_from_change_streams` may only be specified for read-write or partitioned-dml transactions, otherwise the API will return an `INVALID_ARGUMENT` error. "partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction. Authorization to begin a Partitioned DML transaction requires `spanner.databases.beginPartitionedDmlTransaction` permission on the `session` resource. }, @@ -775,7 +775,7 @@
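A sketch of opting a read-write transaction out of change streams created with `allow_txn_exclusion=true`; the session is hypothetical, and per the field docs above, only read-write and Partitioned DML transactions accept the flag:

    from googleapiclient.discovery import build

    service = build('spanner', 'v1')
    sessions = service.projects().instances().databases().sessions()
    session = 'projects/p/instances/i/databases/d/sessions/s'  # hypothetical

    txn = sessions.beginTransaction(
        session=session,
        body={'options': {
            'readWrite': {},
            # Writes from this transaction are omitted from change streams
            # whose DDL sets allow_txn_exclusion=true, and still recorded by
            # all other change streams watching the modified columns.
            'excludeTxnFromChangeStreams': True,
        }},
    ).execute()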

Method Details

"seqno": "A String", # A per-transaction sequence number used to identify this request. This field makes each request idempotent such that if the request is received multiple times, at most one will succeed. The sequence number must be monotonically increasing within the transaction. If a request arrives for the first time with an out-of-order sequence number, the transaction may be aborted. Replays of previously handled requests will yield the same response as the first execution. Required for DML statements. Ignored for queries. "sql": "A String", # Required. The SQL string. "transaction": { # This message is used to select the transaction in which a Read or ExecuteSql call runs. See TransactionOptions for more information about transactions. # The transaction to use. For queries, if none is provided, the default is a temporary read-only transaction with strong concurrency. Standard DML statements require a read-write transaction. To protect against replays, single-use transactions are not supported. The caller must either supply an existing transaction ID or begin a new transaction. Partitioned DML requires an existing Partitioned DML transaction ID. - "begin": { # Transactions: Each session can have at most one active transaction at a time (note that standalone reads and queries use a transaction internally and do count towards the one transaction limit). After the active transaction is completed, the session can immediately be re-used for the next transaction. It is not necessary to create a new session for each transaction. Transaction modes: Cloud Spanner supports three transaction modes: 1. Locking read-write. This type of transaction is the only way to write data into Cloud Spanner. These transactions rely on pessimistic locking and, if necessary, two-phase commit. Locking read-write transactions may abort, requiring the application to retry. 2. Snapshot read-only. Snapshot read-only transactions provide guaranteed consistency across several reads, but do not allow writes. Snapshot read-only transactions can be configured to read at timestamps in the past, or configured to perform a strong read (where Spanner will select a timestamp such that the read is guaranteed to see the effects of all transactions that have committed before the start of the read). Snapshot read-only transactions do not need to be committed. Queries on change streams must be performed with the snapshot read-only transaction mode, specifying a strong read. Please see TransactionOptions.ReadOnly.strong for more details. 3. Partitioned DML. This type of transaction is used to execute a single Partitioned DML statement. Partitioned DML partitions the key space and runs the DML statement over each partition in parallel using separate, internal transactions that commit independently. Partitioned DML transactions do not need to be committed. For transactions that only read, snapshot read-only transactions provide simpler semantics and are almost always faster. In particular, read-only transactions do not take locks, so they do not conflict with read-write transactions. As a consequence of not taking locks, they also do not abort, so retry loops are not needed. Transactions may only read-write data in a single database. They may, however, read-write data in different tables within that database. Locking read-write transactions: Locking transactions may be used to atomically read-modify-write data anywhere in a database. This type of transaction is externally consistent. 
Clients should attempt to minimize the amount of time a transaction is active. Faster transactions commit with higher probability and cause less contention. Cloud Spanner attempts to keep read locks active as long as the transaction continues to do reads, and the transaction has not been terminated by Commit or Rollback. Long periods of inactivity at the client may cause Cloud Spanner to release a transaction's locks and abort it. Conceptually, a read-write transaction consists of zero or more reads or SQL statements followed by Commit. At any time before Commit, the client can send a Rollback request to abort the transaction. Semantics: Cloud Spanner can commit the transaction if all read locks it acquired are still valid at commit time, and it is able to acquire write locks for all writes. Cloud Spanner can abort the transaction for any reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees that the transaction has not modified any user data in Cloud Spanner. Unless the transaction commits, Cloud Spanner makes no guarantees about how long the transaction's locks were held for. It is an error to use Cloud Spanner locks for any sort of mutual exclusion other than between Cloud Spanner transactions themselves. Retrying aborted transactions: When a transaction aborts, the application can choose to retry the whole transaction again. To maximize the chances of successfully committing the retry, the client should execute the retry in the same session as the original attempt. The original session's lock priority increases with each consecutive abort, meaning that each attempt has a slightly better chance of success than the previous. Note that the lock priority is preserved per session (not per transaction). Lock priority is set by the first read or write in the first attempt of a read-write transaction. If the application starts a new session to retry the whole transaction, the transaction loses its original lock priority. Moreover, the lock priority is only preserved if the transaction fails with an `ABORTED` error. Under some circumstances (for example, many transactions attempting to modify the same row(s)), a transaction can abort many times in a short period before successfully committing. Thus, it is not a good idea to cap the number of retries a transaction can attempt; instead, it is better to limit the total amount of time spent retrying. Idle transactions: A transaction is considered idle if it has no outstanding reads or SQL queries and has not started a read or SQL query within the last 10 seconds. Idle transactions can be aborted by Cloud Spanner so that they don't hold on to locks indefinitely. If an idle transaction is aborted, the commit will fail with error `ABORTED`. If this behavior is undesirable, periodically executing a simple SQL query in the transaction (for example, `SELECT 1`) prevents the transaction from becoming idle. Snapshot read-only transactions: Snapshot read-only transactions provide a simpler method than locking read-write transactions for doing several consistent reads. However, this type of transaction does not support writes. Snapshot transactions do not take locks. Instead, they work by choosing a Cloud Spanner timestamp, then executing all reads at that timestamp. Since they do not acquire locks, they do not block concurrent read-write transactions. Unlike locking read-write transactions, snapshot read-only transactions never abort. 
They can fail if the chosen read timestamp is garbage collected; however, the default garbage collection policy is generous enough that most applications do not need to worry about this in practice. Snapshot read-only transactions do not need to call Commit or Rollback (and in fact are not permitted to do so). To execute a snapshot transaction, the client specifies a timestamp bound, which tells Cloud Spanner how to choose a read timestamp. The types of timestamp bound are: - Strong (the default). - Bounded staleness. - Exact staleness. If the Cloud Spanner database to be read is geographically distributed, stale read-only transactions can execute more quickly than strong or read-write transactions, because they are able to execute far from the leader replica. Each type of timestamp bound is discussed in detail below. Strong: Strong reads are guaranteed to see the effects of all transactions that have committed before the start of the read. Furthermore, all rows yielded by a single read are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. Strong reads are not repeatable: two consecutive strong read-only transactions might return inconsistent results if there are concurrent writes. If consistency across reads is required, the reads should be executed within a transaction or at an exact read timestamp. Queries on change streams (see below for more details) must also specify the strong read timestamp bound. See TransactionOptions.ReadOnly.strong. Exact staleness: These timestamp bounds execute reads at a user-specified timestamp. Reads at a timestamp are guaranteed to see a consistent prefix of the global transaction history: they observe modifications done by all transactions with a commit timestamp less than or equal to the read timestamp, and observe none of the modifications done by transactions with a larger commit timestamp. They will block until all conflicting transactions that may be assigned commit timestamps <= the read timestamp have finished. The timestamp can either be expressed as an absolute Cloud Spanner commit timestamp or a staleness relative to the current time. These modes do not require a "negotiation phase" to pick a timestamp. As a result, they execute slightly faster than the equivalent boundedly stale concurrency modes. On the other hand, boundedly stale reads usually return fresher results. See TransactionOptions.ReadOnly.read_timestamp and TransactionOptions.ReadOnly.exact_staleness. Bounded staleness: Bounded staleness modes allow Cloud Spanner to pick the read timestamp, subject to a user-provided staleness bound. Cloud Spanner chooses the newest timestamp within the staleness bound that allows execution of the reads at the closest available replica without blocking. All rows yielded are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. Boundedly stale reads are not repeatable: two stale reads, even if they use the same staleness bound, can execute at different timestamps and thus return inconsistent results. Boundedly stale reads execute in two phases: the first phase negotiates a timestamp among all replicas needed to serve the read. In the second phase, reads are executed at the negotiated timestamp. As a result of the two-phase execution, bounded staleness reads are usually a little slower than comparable exact staleness reads. 
However, they are typically able to return fresher results, and are more likely to execute at the closest replica. Because the timestamp negotiation requires up-front knowledge of which rows will be read, it can only be used with single-use read-only transactions. See TransactionOptions.ReadOnly.max_staleness and TransactionOptions.ReadOnly.min_read_timestamp. Old read timestamps and garbage collection: Cloud Spanner continuously garbage collects deleted and overwritten data in the background to reclaim storage space. This process is known as "version GC". By default, version GC reclaims versions after they are one hour old. Because of this, Cloud Spanner cannot perform reads at read timestamps more than one hour in the past. This restriction also applies to in-progress reads and/or SQL queries whose timestamps become too old while executing. Reads and SQL queries with too-old read timestamps fail with the error `FAILED_PRECONDITION`. You can configure and extend the `VERSION_RETENTION_PERIOD` of a database up to a period as long as one week, which allows Cloud Spanner to perform reads up to one week in the past. Querying change streams: A change stream is a schema object that can be configured to watch data changes on the entire database, a set of tables, or a set of columns in a database. When a change stream is created, Spanner automatically defines a corresponding SQL Table-Valued Function (TVF) that can be used to query the change records in the associated change stream using the ExecuteStreamingSql API. The name of the TVF for a change stream is generated from the name of the change stream: `READ_` followed by the change stream name. All queries on change stream TVFs must be executed using the ExecuteStreamingSql API with a single-use read-only transaction with a strong read-only timestamp_bound. The change stream TVF allows users to specify the start_timestamp and end_timestamp for the time range of interest. All change records within the retention period are accessible using the strong read-only timestamp_bound. All other TransactionOptions are invalid for change stream queries. In addition, if TransactionOptions.read_only.return_read_timestamp is set to true, a special value of 2^63 - 2 will be returned in the Transaction message that describes the transaction, instead of a valid read timestamp. This special value should be discarded and not used for any subsequent queries. Please see https://cloud.google.com/spanner/docs/change-streams for more details on how to query the change stream TVFs. Partitioned DML transactions: Partitioned DML transactions are used to execute DML statements with a different execution strategy that provides different, and often better, scalability properties for large, table-wide operations than DML in a ReadWrite transaction. Smaller scoped statements, such as those in an OLTP workload, should use ReadWrite transactions. Partitioned DML partitions the keyspace and runs the DML statement on each partition in separate, internal transactions. These transactions commit automatically when complete, and run independently from one another. To reduce lock contention, this execution strategy only acquires read locks on rows that match the WHERE clause of the statement. Additionally, the smaller per-partition transactions hold locks for less time. That said, Partitioned DML is not a drop-in replacement for standard DML used in ReadWrite transactions. - The DML statement must be fully-partitionable. 
Specifically, the statement must be expressible as the union of many statements which each access only a single row of the table. - The statement is not applied atomically to all rows of the table. Rather, the statement is applied atomically to partitions of the table, in independent transactions. Secondary index rows are updated atomically with the base table rows. - Partitioned DML does not guarantee exactly-once execution semantics against a partition. The statement is applied at least once to each partition. It is strongly recommended that the DML statement be idempotent to avoid unexpected results. For instance, it is potentially dangerous to run a statement such as `UPDATE table SET column = column + 1` as it could be run multiple times against some rows. - The partitions are committed automatically - there is no support for Commit or Rollback. If the call returns an error, or if the client issuing the ExecuteSql call dies, it is possible that some rows had the statement executed on them successfully. It is also possible that the statement was never executed against other rows. - Partitioned DML transactions may only contain the execution of a single DML statement via ExecuteSql or ExecuteStreamingSql. - If any error is encountered during the execution of the partitioned DML operation (for instance, a UNIQUE INDEX violation, division by zero, or a value that cannot be stored due to schema constraints), then the operation is stopped at that point and an error is returned. It is possible that at this point, some partitions have been committed (or even committed multiple times), and other partitions have not been run at all. Given the above, Partitioned DML is a good fit for large, database-wide operations that are idempotent, such as deleting old rows from a very large table. # Begin a new transaction and execute this read or SQL query in it. The transaction ID of the new transaction is returned in ResultSetMetadata.transaction, which is a Transaction. + "begin": { # Transactions: Each session can have at most one active transaction at a time (note that standalone reads and queries use a transaction internally and do count towards the one transaction limit). After the active transaction is completed, the session can immediately be re-used for the next transaction. It is not necessary to create a new session for each transaction. Transaction modes: Cloud Spanner supports three transaction modes: 1. Locking read-write. This type of transaction is the only way to write data into Cloud Spanner. These transactions rely on pessimistic locking and, if necessary, two-phase commit. Locking read-write transactions may abort, requiring the application to retry. 2. Snapshot read-only. Snapshot read-only transactions provide guaranteed consistency across several reads, but do not allow writes. Snapshot read-only transactions can be configured to read at timestamps in the past, or configured to perform a strong read (where Spanner will select a timestamp such that the read is guaranteed to see the effects of all transactions that have committed before the start of the read). Snapshot read-only transactions do not need to be committed. Queries on change streams must be performed with the snapshot read-only transaction mode, specifying a strong read. See TransactionOptions.ReadOnly.strong for more details. 3. Partitioned DML. This type of transaction is used to execute a single Partitioned DML statement. 
Partitioned DML partitions the key space and runs the DML statement over each partition in parallel using separate, internal transactions that commit independently. Partitioned DML transactions do not need to be committed. For transactions that only read, snapshot read-only transactions provide simpler semantics and are almost always faster. In particular, read-only transactions do not take locks, so they do not conflict with read-write transactions. As a consequence of not taking locks, they also do not abort, so retry loops are not needed. Transactions may only read-write data in a single database. They may, however, read-write data in different tables within that database. Locking read-write transactions: Locking transactions may be used to atomically read-modify-write data anywhere in a database. This type of transaction is externally consistent. Clients should attempt to minimize the amount of time a transaction is active. Faster transactions commit with higher probability and cause less contention. Cloud Spanner attempts to keep read locks active as long as the transaction continues to do reads, and the transaction has not been terminated by Commit or Rollback. Long periods of inactivity at the client may cause Cloud Spanner to release a transaction's locks and abort it. Conceptually, a read-write transaction consists of zero or more reads or SQL statements followed by Commit. At any time before Commit, the client can send a Rollback request to abort the transaction. Semantics: Cloud Spanner can commit the transaction if all read locks it acquired are still valid at commit time, and it is able to acquire write locks for all writes. Cloud Spanner can abort the transaction for any reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees that the transaction has not modified any user data in Cloud Spanner. Unless the transaction commits, Cloud Spanner makes no guarantees about how long the transaction's locks were held for. It is an error to use Cloud Spanner locks for any sort of mutual exclusion other than between Cloud Spanner transactions themselves. Retrying aborted transactions: When a transaction aborts, the application can choose to retry the whole transaction again. To maximize the chances of successfully committing the retry, the client should execute the retry in the same session as the original attempt. The original session's lock priority increases with each consecutive abort, meaning that each attempt has a slightly better chance of success than the previous. Note that the lock priority is preserved per session (not per transaction). Lock priority is set by the first read or write in the first attempt of a read-write transaction. If the application starts a new session to retry the whole transaction, the transaction loses its original lock priority. Moreover, the lock priority is only preserved if the transaction fails with an `ABORTED` error. Under some circumstances (for example, many transactions attempting to modify the same row(s)), a transaction can abort many times in a short period before successfully committing. Thus, it is not a good idea to cap the number of retries a transaction can attempt; instead, it is better to limit the total amount of time spent retrying. Idle transactions: A transaction is considered idle if it has no outstanding reads or SQL queries and has not started a read or SQL query within the last 10 seconds. Idle transactions can be aborted by Cloud Spanner so that they don't hold on to locks indefinitely. 
If an idle transaction is aborted, the commit will fail with error `ABORTED`. If this behavior is undesirable, periodically executing a simple SQL query in the transaction (for example, `SELECT 1`) prevents the transaction from becoming idle. Snapshot read-only transactions: Snapshot read-only transactions provide a simpler method than locking read-write transactions for doing several consistent reads. However, this type of transaction does not support writes. Snapshot transactions do not take locks. Instead, they work by choosing a Cloud Spanner timestamp, then executing all reads at that timestamp. Since they do not acquire locks, they do not block concurrent read-write transactions. Unlike locking read-write transactions, snapshot read-only transactions never abort. They can fail if the chosen read timestamp is garbage collected; however, the default garbage collection policy is generous enough that most applications do not need to worry about this in practice. Snapshot read-only transactions do not need to call Commit or Rollback (and in fact are not permitted to do so). To execute a snapshot transaction, the client specifies a timestamp bound, which tells Cloud Spanner how to choose a read timestamp. The types of timestamp bound are: - Strong (the default). - Bounded staleness. - Exact staleness. If the Cloud Spanner database to be read is geographically distributed, stale read-only transactions can execute more quickly than strong or read-write transactions, because they are able to execute far from the leader replica. Each type of timestamp bound is discussed in detail below. Strong: Strong reads are guaranteed to see the effects of all transactions that have committed before the start of the read. Furthermore, all rows yielded by a single read are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. Strong reads are not repeatable: two consecutive strong read-only transactions might return inconsistent results if there are concurrent writes. If consistency across reads is required, the reads should be executed within a transaction or at an exact read timestamp. Queries on change streams (see below for more details) must also specify the strong read timestamp bound. See TransactionOptions.ReadOnly.strong. Exact staleness: These timestamp bounds execute reads at a user-specified timestamp. Reads at a timestamp are guaranteed to see a consistent prefix of the global transaction history: they observe modifications done by all transactions with a commit timestamp less than or equal to the read timestamp, and observe none of the modifications done by transactions with a larger commit timestamp. They will block until all conflicting transactions that may be assigned commit timestamps <= the read timestamp have finished. The timestamp can either be expressed as an absolute Cloud Spanner commit timestamp or a staleness relative to the current time. These modes do not require a "negotiation phase" to pick a timestamp. As a result, they execute slightly faster than the equivalent boundedly stale concurrency modes. On the other hand, boundedly stale reads usually return fresher results. See TransactionOptions.ReadOnly.read_timestamp and TransactionOptions.ReadOnly.exact_staleness. Bounded staleness: Bounded staleness modes allow Cloud Spanner to pick the read timestamp, subject to a user-provided staleness bound.
Cloud Spanner chooses the newest timestamp within the staleness bound that allows execution of the reads at the closest available replica without blocking. All rows yielded are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. Boundedly stale reads are not repeatable: two stale reads, even if they use the same staleness bound, can execute at different timestamps and thus return inconsistent results. Boundedly stale reads execute in two phases: the first phase negotiates a timestamp among all replicas needed to serve the read. In the second phase, reads are executed at the negotiated timestamp. As a result of the two-phase execution, bounded staleness reads are usually a little slower than comparable exact staleness reads. However, they are typically able to return fresher results, and are more likely to execute at the closest replica. Because the timestamp negotiation requires up-front knowledge of which rows will be read, it can only be used with single-use read-only transactions. See TransactionOptions.ReadOnly.max_staleness and TransactionOptions.ReadOnly.min_read_timestamp. Old read timestamps and garbage collection: Cloud Spanner continuously garbage collects deleted and overwritten data in the background to reclaim storage space. This process is known as "version GC". By default, version GC reclaims versions after they are one hour old. Because of this, Cloud Spanner cannot perform reads at read timestamps more than one hour in the past. This restriction also applies to in-progress reads and/or SQL queries whose timestamps become too old while executing. Reads and SQL queries with too-old read timestamps fail with the error `FAILED_PRECONDITION`. You can configure and extend the `VERSION_RETENTION_PERIOD` of a database up to a period as long as one week, which allows Cloud Spanner to perform reads up to one week in the past. Querying change streams: A change stream is a schema object that can be configured to watch data changes on the entire database, a set of tables, or a set of columns in a database. When a change stream is created, Spanner automatically defines a corresponding SQL Table-Valued Function (TVF) that can be used to query the change records in the associated change stream using the ExecuteStreamingSql API. The name of the TVF for a change stream is generated from the name of the change stream: READ_<change_stream_name>. All queries on change stream TVFs must be executed using the ExecuteStreamingSql API with a single-use read-only transaction with a strong read-only timestamp_bound. The change stream TVF allows users to specify the start_timestamp and end_timestamp for the time range of interest. All change records within the retention period are accessible using the strong read-only timestamp_bound. All other TransactionOptions are invalid for change stream queries. In addition, if TransactionOptions.read_only.return_read_timestamp is set to true, a special value of 2^63 - 2 will be returned in the Transaction message that describes the transaction, instead of a valid read timestamp. This special value should be discarded and not used for any subsequent queries. Please see https://cloud.google.com/spanner/docs/change-streams for more details on how to query the change stream TVFs.
Partitioned DML transactions: Partitioned DML transactions are used to execute DML statements with a different execution strategy that provides different, and often better, scalability properties for large, table-wide operations than DML in a ReadWrite transaction. Smaller scoped statements, such as an OLTP workload, should prefer using ReadWrite transactions. Partitioned DML partitions the keyspace and runs the DML statement on each partition in separate, internal transactions. These transactions commit automatically when complete, and run independently from one another. To reduce lock contention, this execution strategy only acquires read locks on rows that match the WHERE clause of the statement. Additionally, the smaller per-partition transactions hold locks for less time. That said, Partitioned DML is not a drop-in replacement for standard DML used in ReadWrite transactions. - The DML statement must be fully-partitionable. Specifically, the statement must be expressible as the union of many statements which each access only a single row of the table. - The statement is not applied atomically to all rows of the table. Rather, the statement is applied atomically to partitions of the table, in independent transactions. Secondary index rows are updated atomically with the base table rows. - Partitioned DML does not guarantee exactly-once execution semantics against a partition. The statement is applied at least once to each partition. It is strongly recommended that the DML statement be idempotent to avoid unexpected results. For instance, it is potentially dangerous to run a statement such as `UPDATE table SET column = column + 1` as it could be run multiple times against some rows. - The partitions are committed automatically - there is no support for Commit or Rollback. If the call returns an error, or if the client issuing the ExecuteSql call dies, it is possible that some rows had the statement executed on them successfully. It is also possible that the statement was never executed against other rows. - Partitioned DML transactions may only contain the execution of a single DML statement via ExecuteSql or ExecuteStreamingSql. - If any error is encountered during the execution of the partitioned DML operation (for instance, a UNIQUE INDEX violation, division by zero, or a value that cannot be stored due to schema constraints), then the operation is stopped at that point and an error is returned. It is possible that at this point, some partitions have been committed (or even committed multiple times), and other partitions have not been run at all. Given the above, Partitioned DML is a good fit for large, database-wide operations that are idempotent, such as deleting old rows from a very large table. # Begin a new transaction and execute this read or SQL query in it. The transaction ID of the new transaction is returned in ResultSetMetadata.transaction, which is a Transaction. "excludeTxnFromChangeStreams": True or False, # When `exclude_txn_from_change_streams` is set to `true`: * Modifications from this transaction will not be recorded in change streams with DDL option `allow_txn_exclusion=true` that are tracking columns modified by these transactions. * Modifications from this transaction will be recorded in change streams with DDL option `allow_txn_exclusion=false or not set` that are tracking columns modified by these transactions.
When `exclude_txn_from_change_streams` is set to `false` or not set, modifications from this transaction will be recorded in all change streams that are tracking columns modified by these transactions. `exclude_txn_from_change_streams` may only be specified for read-write or partitioned-dml transactions, otherwise the API will return an `INVALID_ARGUMENT` error. "partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction. Authorization to begin a Partitioned DML transaction requires `spanner.databases.beginPartitionedDmlTransaction` permission on the `session` resource. }, @@ -792,7 +792,7 @@
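For illustration, a minimal sketch of how the `partitionedDml` and `excludeTxnFromChangeStreams` options described above might be used with this client. The project, instance, database, session, table, and SQL below are hypothetical placeholders, not part of the generated API surface:

    from googleapiclient.discovery import build

    spanner = build("spanner", "v1")
    sessions = spanner.projects().instances().databases().sessions()
    # Hypothetical session resource name.
    session = "projects/my-project/instances/my-instance/databases/my-db/sessions/my-session"

    # Partitioned DML requires an existing Partitioned DML transaction ID, so
    # begin one first. This transaction cannot be committed or rolled back.
    # excludeTxnFromChangeStreams keeps its modifications out of change streams
    # created with allow_txn_exclusion=true (only valid for read-write or
    # partitioned-dml transactions).
    txn = sessions.beginTransaction(
        session=session,
        body={"options": {"partitionedDml": {}, "excludeTxnFromChangeStreams": True}},
    ).execute()

    # The statement is applied at least once per partition, so keep it idempotent.
    sessions.executeSql(
        session=session,
        body={
            "transaction": {"id": txn["id"]},
            "sql": "DELETE FROM Events WHERE CreatedAt < TIMESTAMP '2020-01-01'",  # hypothetical table
            "seqno": "1",  # required for DML; makes replays of this request idempotent
        },
    ).execute()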

Method Details
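As a concrete reading of the retry guidance repeated in these docstrings (retry `ABORTED` in the same session, bounded by total elapsed time rather than an attempt count), one possible sketch; the session, table, and 60-second budget are assumptions:

    import time
    from googleapiclient.discovery import build
    from googleapiclient.errors import HttpError

    spanner = build("spanner", "v1")
    sessions = spanner.projects().instances().databases().sessions()
    # Hypothetical session resource name.
    session = "projects/my-project/instances/my-instance/databases/my-db/sessions/my-session"

    deadline = time.monotonic() + 60  # cap total retry time, not retry count
    while True:
        try:
            # Commit a blind write in a single-use read-write transaction.
            sessions.commit(
                session=session,
                body={
                    "singleUseTransaction": {"readWrite": {}},
                    "mutations": [{"insertOrUpdate": {
                        "table": "Counters",  # hypothetical table
                        "columns": ["Id", "Value"],
                        "values": [["1", "42"]],  # INT64 values are JSON strings
                    }}],
                },
            ).execute()
            break
        except HttpError as err:
            # gRPC ABORTED surfaces over REST as HTTP 409; retrying in the same
            # session keeps the accumulated lock priority.
            if err.resp.status != 409 or time.monotonic() > deadline:
                raise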

}, }, "id": "A String", # Execute the read or SQL query in a previously-started transaction. - "singleUse": { # Transactions: Each session can have at most one active transaction at a time (note that standalone reads and queries use a transaction internally and do count towards the one transaction limit). After the active transaction is completed, the session can immediately be re-used for the next transaction. It is not necessary to create a new session for each transaction. Transaction modes: Cloud Spanner supports three transaction modes: 1. Locking read-write. This type of transaction is the only way to write data into Cloud Spanner. These transactions rely on pessimistic locking and, if necessary, two-phase commit. Locking read-write transactions may abort, requiring the application to retry. 2. Snapshot read-only. Snapshot read-only transactions provide guaranteed consistency across several reads, but do not allow writes. Snapshot read-only transactions can be configured to read at timestamps in the past, or configured to perform a strong read (where Spanner will select a timestamp such that the read is guaranteed to see the effects of all transactions that have committed before the start of the read). Snapshot read-only transactions do not need to be committed. Queries on change streams must be performed with the snapshot read-only transaction mode, specifying a strong read. Please see TransactionOptions.ReadOnly.strong for more details. 3. Partitioned DML. This type of transaction is used to execute a single Partitioned DML statement. Partitioned DML partitions the key space and runs the DML statement over each partition in parallel using separate, internal transactions that commit independently. Partitioned DML transactions do not need to be committed. For transactions that only read, snapshot read-only transactions provide simpler semantics and are almost always faster. In particular, read-only transactions do not take locks, so they do not conflict with read-write transactions. As a consequence of not taking locks, they also do not abort, so retry loops are not needed. Transactions may only read-write data in a single database. They may, however, read-write data in different tables within that database. Locking read-write transactions: Locking transactions may be used to atomically read-modify-write data anywhere in a database. This type of transaction is externally consistent. Clients should attempt to minimize the amount of time a transaction is active. Faster transactions commit with higher probability and cause less contention. Cloud Spanner attempts to keep read locks active as long as the transaction continues to do reads, and the transaction has not been terminated by Commit or Rollback. Long periods of inactivity at the client may cause Cloud Spanner to release a transaction's locks and abort it. Conceptually, a read-write transaction consists of zero or more reads or SQL statements followed by Commit. At any time before Commit, the client can send a Rollback request to abort the transaction. Semantics: Cloud Spanner can commit the transaction if all read locks it acquired are still valid at commit time, and it is able to acquire write locks for all writes. Cloud Spanner can abort the transaction for any reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees that the transaction has not modified any user data in Cloud Spanner. Unless the transaction commits, Cloud Spanner makes no guarantees about how long the transaction's locks were held for. 
It is an error to use Cloud Spanner locks for any sort of mutual exclusion other than between Cloud Spanner transactions themselves. Retrying aborted transactions: When a transaction aborts, the application can choose to retry the whole transaction again. To maximize the chances of successfully committing the retry, the client should execute the retry in the same session as the original attempt. The original session's lock priority increases with each consecutive abort, meaning that each attempt has a slightly better chance of success than the previous. Note that the lock priority is preserved per session (not per transaction). Lock priority is set by the first read or write in the first attempt of a read-write transaction. If the application starts a new session to retry the whole transaction, the transaction loses its original lock priority. Moreover, the lock priority is only preserved if the transaction fails with an `ABORTED` error. Under some circumstances (for example, many transactions attempting to modify the same row(s)), a transaction can abort many times in a short period before successfully committing. Thus, it is not a good idea to cap the number of retries a transaction can attempt; instead, it is better to limit the total amount of time spent retrying. Idle transactions: A transaction is considered idle if it has no outstanding reads or SQL queries and has not started a read or SQL query within the last 10 seconds. Idle transactions can be aborted by Cloud Spanner so that they don't hold on to locks indefinitely. If an idle transaction is aborted, the commit will fail with error `ABORTED`. If this behavior is undesirable, periodically executing a simple SQL query in the transaction (for example, `SELECT 1`) prevents the transaction from becoming idle. Snapshot read-only transactions: Snapshot read-only transactions provide a simpler method than locking read-write transactions for doing several consistent reads. However, this type of transaction does not support writes. Snapshot transactions do not take locks. Instead, they work by choosing a Cloud Spanner timestamp, then executing all reads at that timestamp. Since they do not acquire locks, they do not block concurrent read-write transactions. Unlike locking read-write transactions, snapshot read-only transactions never abort. They can fail if the chosen read timestamp is garbage collected; however, the default garbage collection policy is generous enough that most applications do not need to worry about this in practice. Snapshot read-only transactions do not need to call Commit or Rollback (and in fact are not permitted to do so). To execute a snapshot transaction, the client specifies a timestamp bound, which tells Cloud Spanner how to choose a read timestamp. The types of timestamp bound are: - Strong (the default). - Bounded staleness. - Exact staleness. If the Cloud Spanner database to be read is geographically distributed, stale read-only transactions can execute more quickly than strong or read-write transactions, because they are able to execute far from the leader replica. Each type of timestamp bound is discussed in detail below. Strong: Strong reads are guaranteed to see the effects of all transactions that have committed before the start of the read. Furthermore, all rows yielded by a single read are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction.
Strong reads are not repeatable: two consecutive strong read-only transactions might return inconsistent results if there are concurrent writes. If consistency across reads is required, the reads should be executed within a transaction or at an exact read timestamp. Queries on change streams (see below for more details) must also specify the strong read timestamp bound. See TransactionOptions.ReadOnly.strong. Exact staleness: These timestamp bounds execute reads at a user-specified timestamp. Reads at a timestamp are guaranteed to see a consistent prefix of the global transaction history: they observe modifications done by all transactions with a commit timestamp less than or equal to the read timestamp, and observe none of the modifications done by transactions with a larger commit timestamp. They will block until all conflicting transactions that may be assigned commit timestamps <= the read timestamp have finished. The timestamp can either be expressed as an absolute Cloud Spanner commit timestamp or a staleness relative to the current time. These modes do not require a "negotiation phase" to pick a timestamp. As a result, they execute slightly faster than the equivalent boundedly stale concurrency modes. On the other hand, boundedly stale reads usually return fresher results. See TransactionOptions.ReadOnly.read_timestamp and TransactionOptions.ReadOnly.exact_staleness. Bounded staleness: Bounded staleness modes allow Cloud Spanner to pick the read timestamp, subject to a user-provided staleness bound. Cloud Spanner chooses the newest timestamp within the staleness bound that allows execution of the reads at the closest available replica without blocking. All rows yielded are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. Boundedly stale reads are not repeatable: two stale reads, even if they use the same staleness bound, can execute at different timestamps and thus return inconsistent results. Boundedly stale reads execute in two phases: the first phase negotiates a timestamp among all replicas needed to serve the read. In the second phase, reads are executed at the negotiated timestamp. As a result of the two-phase execution, bounded staleness reads are usually a little slower than comparable exact staleness reads. However, they are typically able to return fresher results, and are more likely to execute at the closest replica. Because the timestamp negotiation requires up-front knowledge of which rows will be read, it can only be used with single-use read-only transactions. See TransactionOptions.ReadOnly.max_staleness and TransactionOptions.ReadOnly.min_read_timestamp. Old read timestamps and garbage collection: Cloud Spanner continuously garbage collects deleted and overwritten data in the background to reclaim storage space. This process is known as "version GC". By default, version GC reclaims versions after they are one hour old. Because of this, Cloud Spanner cannot perform reads at read timestamps more than one hour in the past. This restriction also applies to in-progress reads and/or SQL queries whose timestamps become too old while executing. Reads and SQL queries with too-old read timestamps fail with the error `FAILED_PRECONDITION`. You can configure and extend the `VERSION_RETENTION_PERIOD` of a database up to a period as long as one week, which allows Cloud Spanner to perform reads up to one week in the past.
Querying change streams: A change stream is a schema object that can be configured to watch data changes on the entire database, a set of tables, or a set of columns in a database. When a change stream is created, Spanner automatically defines a corresponding SQL Table-Valued Function (TVF) that can be used to query the change records in the associated change stream using the ExecuteStreamingSql API. The name of the TVF for a change stream is generated from the name of the change stream: READ_<change_stream_name>. All queries on change stream TVFs must be executed using the ExecuteStreamingSql API with a single-use read-only transaction with a strong read-only timestamp_bound. The change stream TVF allows users to specify the start_timestamp and end_timestamp for the time range of interest. All change records within the retention period are accessible using the strong read-only timestamp_bound. All other TransactionOptions are invalid for change stream queries. In addition, if TransactionOptions.read_only.return_read_timestamp is set to true, a special value of 2^63 - 2 will be returned in the Transaction message that describes the transaction, instead of a valid read timestamp. This special value should be discarded and not used for any subsequent queries. Please see https://cloud.google.com/spanner/docs/change-streams for more details on how to query the change stream TVFs. Partitioned DML transactions: Partitioned DML transactions are used to execute DML statements with a different execution strategy that provides different, and often better, scalability properties for large, table-wide operations than DML in a ReadWrite transaction. Smaller scoped statements, such as an OLTP workload, should prefer using ReadWrite transactions. Partitioned DML partitions the keyspace and runs the DML statement on each partition in separate, internal transactions. These transactions commit automatically when complete, and run independently from one another. To reduce lock contention, this execution strategy only acquires read locks on rows that match the WHERE clause of the statement. Additionally, the smaller per-partition transactions hold locks for less time. That said, Partitioned DML is not a drop-in replacement for standard DML used in ReadWrite transactions. - The DML statement must be fully-partitionable. Specifically, the statement must be expressible as the union of many statements which each access only a single row of the table. - The statement is not applied atomically to all rows of the table. Rather, the statement is applied atomically to partitions of the table, in independent transactions. Secondary index rows are updated atomically with the base table rows. - Partitioned DML does not guarantee exactly-once execution semantics against a partition. The statement is applied at least once to each partition. It is strongly recommended that the DML statement be idempotent to avoid unexpected results. For instance, it is potentially dangerous to run a statement such as `UPDATE table SET column = column + 1` as it could be run multiple times against some rows. - The partitions are committed automatically - there is no support for Commit or Rollback. If the call returns an error, or if the client issuing the ExecuteSql call dies, it is possible that some rows had the statement executed on them successfully. It is also possible that the statement was never executed against other rows. - Partitioned DML transactions may only contain the execution of a single DML statement via ExecuteSql or ExecuteStreamingSql.
- If any error is encountered during the execution of the partitioned DML operation (for instance, a UNIQUE INDEX violation, division by zero, or a value that cannot be stored due to schema constraints), then the operation is stopped at that point and an error is returned. It is possible that at this point, some partitions have been committed (or even committed multiple times), and other partitions have not been run at all. Given the above, Partitioned DML is a good fit for large, database-wide operations that are idempotent, such as deleting old rows from a very large table. # Execute the read or SQL query in a temporary transaction. This is the most efficient way to execute a transaction that consists of a single SQL query. + "singleUse": { # Transactions: Each session can have at most one active transaction at a time (note that standalone reads and queries use a transaction internally and do count towards the one transaction limit). After the active transaction is completed, the session can immediately be re-used for the next transaction. It is not necessary to create a new session for each transaction. Transaction modes: Cloud Spanner supports three transaction modes: 1. Locking read-write. This type of transaction is the only way to write data into Cloud Spanner. These transactions rely on pessimistic locking and, if necessary, two-phase commit. Locking read-write transactions may abort, requiring the application to retry. 2. Snapshot read-only. Snapshot read-only transactions provide guaranteed consistency across several reads, but do not allow writes. Snapshot read-only transactions can be configured to read at timestamps in the past, or configured to perform a strong read (where Spanner will select a timestamp such that the read is guaranteed to see the effects of all transactions that have committed before the start of the read). Snapshot read-only transactions do not need to be committed. Queries on change streams must be performed with the snapshot read-only transaction mode, specifying a strong read. See TransactionOptions.ReadOnly.strong for more details. 3. Partitioned DML. This type of transaction is used to execute a single Partitioned DML statement. Partitioned DML partitions the key space and runs the DML statement over each partition in parallel using separate, internal transactions that commit independently. Partitioned DML transactions do not need to be committed. For transactions that only read, snapshot read-only transactions provide simpler semantics and are almost always faster. In particular, read-only transactions do not take locks, so they do not conflict with read-write transactions. As a consequence of not taking locks, they also do not abort, so retry loops are not needed. Transactions may only read-write data in a single database. They may, however, read-write data in different tables within that database. Locking read-write transactions: Locking transactions may be used to atomically read-modify-write data anywhere in a database. This type of transaction is externally consistent. Clients should attempt to minimize the amount of time a transaction is active. Faster transactions commit with higher probability and cause less contention. Cloud Spanner attempts to keep read locks active as long as the transaction continues to do reads, and the transaction has not been terminated by Commit or Rollback. Long periods of inactivity at the client may cause Cloud Spanner to release a transaction's locks and abort it.
Conceptually, a read-write transaction consists of zero or more reads or SQL statements followed by Commit. At any time before Commit, the client can send a Rollback request to abort the transaction. Semantics: Cloud Spanner can commit the transaction if all read locks it acquired are still valid at commit time, and it is able to acquire write locks for all writes. Cloud Spanner can abort the transaction for any reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees that the transaction has not modified any user data in Cloud Spanner. Unless the transaction commits, Cloud Spanner makes no guarantees about how long the transaction's locks were held for. It is an error to use Cloud Spanner locks for any sort of mutual exclusion other than between Cloud Spanner transactions themselves. Retrying aborted transactions: When a transaction aborts, the application can choose to retry the whole transaction again. To maximize the chances of successfully committing the retry, the client should execute the retry in the same session as the original attempt. The original session's lock priority increases with each consecutive abort, meaning that each attempt has a slightly better chance of success than the previous. Note that the lock priority is preserved per session (not per transaction). Lock priority is set by the first read or write in the first attempt of a read-write transaction. If the application starts a new session to retry the whole transaction, the transaction loses its original lock priority. Moreover, the lock priority is only preserved if the transaction fails with an `ABORTED` error. Under some circumstances (for example, many transactions attempting to modify the same row(s)), a transaction can abort many times in a short period before successfully committing. Thus, it is not a good idea to cap the number of retries a transaction can attempt; instead, it is better to limit the total amount of time spent retrying. Idle transactions: A transaction is considered idle if it has no outstanding reads or SQL queries and has not started a read or SQL query within the last 10 seconds. Idle transactions can be aborted by Cloud Spanner so that they don't hold on to locks indefinitely. If an idle transaction is aborted, the commit will fail with error `ABORTED`. If this behavior is undesirable, periodically executing a simple SQL query in the transaction (for example, `SELECT 1`) prevents the transaction from becoming idle. Snapshot read-only transactions: Snapshot read-only transactions provide a simpler method than locking read-write transactions for doing several consistent reads. However, this type of transaction does not support writes. Snapshot transactions do not take locks. Instead, they work by choosing a Cloud Spanner timestamp, then executing all reads at that timestamp. Since they do not acquire locks, they do not block concurrent read-write transactions. Unlike locking read-write transactions, snapshot read-only transactions never abort. They can fail if the chosen read timestamp is garbage collected; however, the default garbage collection policy is generous enough that most applications do not need to worry about this in practice. Snapshot read-only transactions do not need to call Commit or Rollback (and in fact are not permitted to do so). To execute a snapshot transaction, the client specifies a timestamp bound, which tells Cloud Spanner how to choose a read timestamp. The types of timestamp bound are: - Strong (the default). - Bounded staleness.
- Exact staleness. If the Cloud Spanner database to be read is geographically distributed, stale read-only transactions can execute more quickly than strong or read-write transactions, because they are able to execute far from the leader replica. Each type of timestamp bound is discussed in detail below. Strong: Strong reads are guaranteed to see the effects of all transactions that have committed before the start of the read. Furthermore, all rows yielded by a single read are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. Strong reads are not repeatable: two consecutive strong read-only transactions might return inconsistent results if there are concurrent writes. If consistency across reads is required, the reads should be executed within a transaction or at an exact read timestamp. Queries on change streams (see below for more details) must also specify the strong read timestamp bound. See TransactionOptions.ReadOnly.strong. Exact staleness: These timestamp bounds execute reads at a user-specified timestamp. Reads at a timestamp are guaranteed to see a consistent prefix of the global transaction history: they observe modifications done by all transactions with a commit timestamp less than or equal to the read timestamp, and observe none of the modifications done by transactions with a larger commit timestamp. They will block until all conflicting transactions that may be assigned commit timestamps <= the read timestamp have finished. The timestamp can either be expressed as an absolute Cloud Spanner commit timestamp or a staleness relative to the current time. These modes do not require a "negotiation phase" to pick a timestamp. As a result, they execute slightly faster than the equivalent boundedly stale concurrency modes. On the other hand, boundedly stale reads usually return fresher results. See TransactionOptions.ReadOnly.read_timestamp and TransactionOptions.ReadOnly.exact_staleness. Bounded staleness: Bounded staleness modes allow Cloud Spanner to pick the read timestamp, subject to a user-provided staleness bound. Cloud Spanner chooses the newest timestamp within the staleness bound that allows execution of the reads at the closest available replica without blocking. All rows yielded are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. Boundedly stale reads are not repeatable: two stale reads, even if they use the same staleness bound, can execute at different timestamps and thus return inconsistent results. Boundedly stale reads execute in two phases: the first phase negotiates a timestamp among all replicas needed to serve the read. In the second phase, reads are executed at the negotiated timestamp. As a result of the two-phase execution, bounded staleness reads are usually a little slower than comparable exact staleness reads. However, they are typically able to return fresher results, and are more likely to execute at the closest replica. Because the timestamp negotiation requires up-front knowledge of which rows will be read, it can only be used with single-use read-only transactions. See TransactionOptions.ReadOnly.max_staleness and TransactionOptions.ReadOnly.min_read_timestamp. Old read timestamps and garbage collection: Cloud Spanner continuously garbage collects deleted and overwritten data in the background to reclaim storage space. This process is known as "version GC".
By default, version GC reclaims versions after they are one hour old. Because of this, Cloud Spanner cannot perform reads at read timestamps more than one hour in the past. This restriction also applies to in-progress reads and/or SQL queries whose timestamps become too old while executing. Reads and SQL queries with too-old read timestamps fail with the error `FAILED_PRECONDITION`. You can configure and extend the `VERSION_RETENTION_PERIOD` of a database up to a period as long as one week, which allows Cloud Spanner to perform reads up to one week in the past. Querying change streams: A change stream is a schema object that can be configured to watch data changes on the entire database, a set of tables, or a set of columns in a database. When a change stream is created, Spanner automatically defines a corresponding SQL Table-Valued Function (TVF) that can be used to query the change records in the associated change stream using the ExecuteStreamingSql API. The name of the TVF for a change stream is generated from the name of the change stream: READ_<change_stream_name>. All queries on change stream TVFs must be executed using the ExecuteStreamingSql API with a single-use read-only transaction with a strong read-only timestamp_bound. The change stream TVF allows users to specify the start_timestamp and end_timestamp for the time range of interest. All change records within the retention period are accessible using the strong read-only timestamp_bound. All other TransactionOptions are invalid for change stream queries. In addition, if TransactionOptions.read_only.return_read_timestamp is set to true, a special value of 2^63 - 2 will be returned in the Transaction message that describes the transaction, instead of a valid read timestamp. This special value should be discarded and not used for any subsequent queries. Please see https://cloud.google.com/spanner/docs/change-streams for more details on how to query the change stream TVFs. Partitioned DML transactions: Partitioned DML transactions are used to execute DML statements with a different execution strategy that provides different, and often better, scalability properties for large, table-wide operations than DML in a ReadWrite transaction. Smaller scoped statements, such as an OLTP workload, should prefer using ReadWrite transactions. Partitioned DML partitions the keyspace and runs the DML statement on each partition in separate, internal transactions. These transactions commit automatically when complete, and run independently from one another. To reduce lock contention, this execution strategy only acquires read locks on rows that match the WHERE clause of the statement. Additionally, the smaller per-partition transactions hold locks for less time. That said, Partitioned DML is not a drop-in replacement for standard DML used in ReadWrite transactions. - The DML statement must be fully-partitionable. Specifically, the statement must be expressible as the union of many statements which each access only a single row of the table. - The statement is not applied atomically to all rows of the table. Rather, the statement is applied atomically to partitions of the table, in independent transactions. Secondary index rows are updated atomically with the base table rows. - Partitioned DML does not guarantee exactly-once execution semantics against a partition. The statement is applied at least once to each partition. It is strongly recommended that the DML statement be idempotent to avoid unexpected results.
For instance, it is potentially dangerous to run a statement such as `UPDATE table SET column = column + 1` as it could be run multiple times against some rows. - The partitions are committed automatically - there is no support for Commit or Rollback. If the call returns an error, or if the client issuing the ExecuteSql call dies, it is possible that some rows had the statement executed on them successfully. It is also possible that the statement was never executed against other rows. - Partitioned DML transactions may only contain the execution of a single DML statement via ExecuteSql or ExecuteStreamingSql. - If any error is encountered during the execution of the partitioned DML operation (for instance, a UNIQUE INDEX violation, division by zero, or a value that cannot be stored due to schema constraints), then the operation is stopped at that point and an error is returned. It is possible that at this point, some partitions have been committed (or even committed multiple times), and other partitions have not been run at all. Given the above, Partitioned DML is a good fit for large, database-wide operations that are idempotent, such as deleting old rows from a very large table. # Execute the read or SQL query in a temporary transaction. This is the most efficient way to execute a transaction that consists of a single SQL query. "excludeTxnFromChangeStreams": True or False, # When `exclude_txn_from_change_streams` is set to `true`: * Modifications from this transaction will not be recorded in change streams with DDL option `allow_txn_exclusion=true` that are tracking columns modified by these transactions. * Modifications from this transaction will be recorded in change streams with DDL option `allow_txn_exclusion=false or not set` that are tracking columns modified by these transactions. When `exclude_txn_from_change_streams` is set to `false` or not set, modifications from this transaction will be recorded in all change streams that are tracking columns modified by these transactions. `exclude_txn_from_change_streams` may only be specified for read-write or partitioned-dml transactions, otherwise the API will return an `INVALID_ARGUMENT` error. "partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction. Authorization to begin a Partitioned DML transaction requires `spanner.databases.beginPartitionedDmlTransaction` permission on the `session` resource. }, @@ -959,7 +959,7 @@
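A sketch of the temporary (single-use) transaction described above, using an exact-staleness read-only bound and echoing the timestamp Spanner chose; the session and query are hypothetical placeholders:

    from googleapiclient.discovery import build

    spanner = build("spanner", "v1")
    resp = spanner.projects().instances().databases().sessions().executeSql(
        session="projects/my-project/instances/my-instance/databases/my-db/sessions/my-session",  # hypothetical
        body={
            "transaction": {"singleUse": {"readOnly": {
                "exactStaleness": "15s",      # read exactly 15 seconds in the past
                "returnReadTimestamp": True,  # report the timestamp actually used
            }}},
            "sql": "SELECT Id, Value FROM Counters",  # hypothetical table
        },
    ).execute()
    # With returnReadTimestamp set, the chosen timestamp is echoed back.
    print(resp["metadata"]["transaction"]["readTimestamp"])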

Method Details
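Tying together the change-stream rules above (ExecuteStreamingSql, single-use read-only transaction, strong timestamp bound), a hedged sketch against a hypothetical change stream named MyStream; the session, stream name, and time range are assumptions:

    from googleapiclient.discovery import build

    spanner = build("spanner", "v1")
    partial_result_sets = spanner.projects().instances().databases().sessions().executeStreamingSql(
        session="projects/my-project/instances/my-instance/databases/my-db/sessions/my-session",  # hypothetical
        body={
            # Change stream queries require a single-use read-only transaction
            # with a strong timestamp bound.
            "transaction": {"singleUse": {"readOnly": {"strong": True}}},
            "sql": (
                "SELECT ChangeRecord FROM READ_MyStream ("
                "start_timestamp => @start, end_timestamp => @end, "
                "partition_token => NULL, heartbeat_milliseconds => 10000)"
            ),
            "params": {"start": "2024-06-04T00:00:00Z", "end": "2024-06-04T01:00:00Z"},
            "paramTypes": {"start": {"code": "TIMESTAMP"}, "end": {"code": "TIMESTAMP"}},
        },
    ).execute()  # over REST, the stream arrives as a sequence of PartialResultSet messages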

"seqno": "A String", # A per-transaction sequence number used to identify this request. This field makes each request idempotent such that if the request is received multiple times, at most one will succeed. The sequence number must be monotonically increasing within the transaction. If a request arrives for the first time with an out-of-order sequence number, the transaction may be aborted. Replays of previously handled requests will yield the same response as the first execution. Required for DML statements. Ignored for queries. "sql": "A String", # Required. The SQL string. "transaction": { # This message is used to select the transaction in which a Read or ExecuteSql call runs. See TransactionOptions for more information about transactions. # The transaction to use. For queries, if none is provided, the default is a temporary read-only transaction with strong concurrency. Standard DML statements require a read-write transaction. To protect against replays, single-use transactions are not supported. The caller must either supply an existing transaction ID or begin a new transaction. Partitioned DML requires an existing Partitioned DML transaction ID. - "begin": { # Transactions: Each session can have at most one active transaction at a time (note that standalone reads and queries use a transaction internally and do count towards the one transaction limit). After the active transaction is completed, the session can immediately be re-used for the next transaction. It is not necessary to create a new session for each transaction. Transaction modes: Cloud Spanner supports three transaction modes: 1. Locking read-write. This type of transaction is the only way to write data into Cloud Spanner. These transactions rely on pessimistic locking and, if necessary, two-phase commit. Locking read-write transactions may abort, requiring the application to retry. 2. Snapshot read-only. Snapshot read-only transactions provide guaranteed consistency across several reads, but do not allow writes. Snapshot read-only transactions can be configured to read at timestamps in the past, or configured to perform a strong read (where Spanner will select a timestamp such that the read is guaranteed to see the effects of all transactions that have committed before the start of the read). Snapshot read-only transactions do not need to be committed. Queries on change streams must be performed with the snapshot read-only transaction mode, specifying a strong read. Please see TransactionOptions.ReadOnly.strong for more details. 3. Partitioned DML. This type of transaction is used to execute a single Partitioned DML statement. Partitioned DML partitions the key space and runs the DML statement over each partition in parallel using separate, internal transactions that commit independently. Partitioned DML transactions do not need to be committed. For transactions that only read, snapshot read-only transactions provide simpler semantics and are almost always faster. In particular, read-only transactions do not take locks, so they do not conflict with read-write transactions. As a consequence of not taking locks, they also do not abort, so retry loops are not needed. Transactions may only read-write data in a single database. They may, however, read-write data in different tables within that database. Locking read-write transactions: Locking transactions may be used to atomically read-modify-write data anywhere in a database. This type of transaction is externally consistent. 
Clients should attempt to minimize the amount of time a transaction is active. Faster transactions commit with higher probability and cause less contention. Cloud Spanner attempts to keep read locks active as long as the transaction continues to do reads, and the transaction has not been terminated by Commit or Rollback. Long periods of inactivity at the client may cause Cloud Spanner to release a transaction's locks and abort it. Conceptually, a read-write transaction consists of zero or more reads or SQL statements followed by Commit. At any time before Commit, the client can send a Rollback request to abort the transaction. Semantics: Cloud Spanner can commit the transaction if all read locks it acquired are still valid at commit time, and it is able to acquire write locks for all writes. Cloud Spanner can abort the transaction for any reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees that the transaction has not modified any user data in Cloud Spanner. Unless the transaction commits, Cloud Spanner makes no guarantees about how long the transaction's locks were held for. It is an error to use Cloud Spanner locks for any sort of mutual exclusion other than between Cloud Spanner transactions themselves. Retrying aborted transactions: When a transaction aborts, the application can choose to retry the whole transaction again. To maximize the chances of successfully committing the retry, the client should execute the retry in the same session as the original attempt. The original session's lock priority increases with each consecutive abort, meaning that each attempt has a slightly better chance of success than the previous. Note that the lock priority is preserved per session (not per transaction). Lock priority is set by the first read or write in the first attempt of a read-write transaction. If the application starts a new session to retry the whole transaction, the transaction loses its original lock priority. Moreover, the lock priority is only preserved if the transaction fails with an `ABORTED` error. Under some circumstances (for example, many transactions attempting to modify the same row(s)), a transaction can abort many times in a short period before successfully committing. Thus, it is not a good idea to cap the number of retries a transaction can attempt; instead, it is better to limit the total amount of time spent retrying. Idle transactions: A transaction is considered idle if it has no outstanding reads or SQL queries and has not started a read or SQL query within the last 10 seconds. Idle transactions can be aborted by Cloud Spanner so that they don't hold on to locks indefinitely. If an idle transaction is aborted, the commit will fail with error `ABORTED`. If this behavior is undesirable, periodically executing a simple SQL query in the transaction (for example, `SELECT 1`) prevents the transaction from becoming idle. Snapshot read-only transactions: Snapshot read-only transactions provide a simpler method than locking read-write transactions for doing several consistent reads. However, this type of transaction does not support writes. Snapshot transactions do not take locks. Instead, they work by choosing a Cloud Spanner timestamp, then executing all reads at that timestamp. Since they do not acquire locks, they do not block concurrent read-write transactions. Unlike locking read-write transactions, snapshot read-only transactions never abort.
They can fail if the chosen read timestamp is garbage collected; however, the default garbage collection policy is generous enough that most applications do not need to worry about this in practice. Snapshot read-only transactions do not need to call Commit or Rollback (and in fact are not permitted to do so). To execute a snapshot transaction, the client specifies a timestamp bound, which tells Cloud Spanner how to choose a read timestamp. The types of timestamp bound are: - Strong (the default). - Bounded staleness. - Exact staleness. If the Cloud Spanner database to be read is geographically distributed, stale read-only transactions can execute more quickly than strong or read-write transactions, because they are able to execute far from the leader replica. Each type of timestamp bound is discussed in detail below. Strong: Strong reads are guaranteed to see the effects of all transactions that have committed before the start of the read. Furthermore, all rows yielded by a single read are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. Strong reads are not repeatable: two consecutive strong read-only transactions might return inconsistent results if there are concurrent writes. If consistency across reads is required, the reads should be executed within a transaction or at an exact read timestamp. Queries on change streams (see below for more details) must also specify the strong read timestamp bound. See TransactionOptions.ReadOnly.strong. Exact staleness: These timestamp bounds execute reads at a user-specified timestamp. Reads at a timestamp are guaranteed to see a consistent prefix of the global transaction history: they observe modifications done by all transactions with a commit timestamp less than or equal to the read timestamp, and observe none of the modifications done by transactions with a larger commit timestamp. They will block until all conflicting transactions that may be assigned commit timestamps <= the read timestamp have finished. The timestamp can either be expressed as an absolute Cloud Spanner commit timestamp or a staleness relative to the current time. These modes do not require a "negotiation phase" to pick a timestamp. As a result, they execute slightly faster than the equivalent boundedly stale concurrency modes. On the other hand, boundedly stale reads usually return fresher results. See TransactionOptions.ReadOnly.read_timestamp and TransactionOptions.ReadOnly.exact_staleness. Bounded staleness: Bounded staleness modes allow Cloud Spanner to pick the read timestamp, subject to a user-provided staleness bound. Cloud Spanner chooses the newest timestamp within the staleness bound that allows execution of the reads at the closest available replica without blocking. All rows yielded are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. Boundedly stale reads are not repeatable: two stale reads, even if they use the same staleness bound, can execute at different timestamps and thus return inconsistent results. Boundedly stale reads execute in two phases: the first phase negotiates a timestamp among all replicas needed to serve the read. In the second phase, reads are executed at the negotiated timestamp. As a result of the two-phase execution, bounded staleness reads are usually a little slower than comparable exact staleness reads.
However, they are typically able to return fresher results, and are more likely to execute at the closest replica. Because the timestamp negotiation requires up-front knowledge of which rows will be read, it can only be used with single-use read-only transactions. See TransactionOptions.ReadOnly.max_staleness and TransactionOptions.ReadOnly.min_read_timestamp. Old read timestamps and garbage collection: Cloud Spanner continuously garbage collects deleted and overwritten data in the background to reclaim storage space. This process is known as "version GC". By default, version GC reclaims versions after they are one hour old. Because of this, Cloud Spanner cannot perform reads at read timestamps more than one hour in the past. This restriction also applies to in-progress reads and/or SQL queries whose timestamps become too old while executing. Reads and SQL queries with too-old read timestamps fail with the error `FAILED_PRECONDITION`. You can configure and extend the `VERSION_RETENTION_PERIOD` of a database up to a period as long as one week, which allows Cloud Spanner to perform reads up to one week in the past. Querying change streams: A change stream is a schema object that can be configured to watch data changes on the entire database, a set of tables, or a set of columns in a database. When a change stream is created, Spanner automatically defines a corresponding SQL Table-Valued Function (TVF) that can be used to query the change records in the associated change stream using the ExecuteStreamingSql API. The name of the TVF for a change stream is generated from the name of the change stream: READ_<change_stream_name>. All queries on change stream TVFs must be executed using the ExecuteStreamingSql API with a single-use read-only transaction with a strong read-only timestamp_bound. The change stream TVF allows users to specify the start_timestamp and end_timestamp for the time range of interest. All change records within the retention period are accessible using the strong read-only timestamp_bound. All other TransactionOptions are invalid for change stream queries. In addition, if TransactionOptions.read_only.return_read_timestamp is set to true, a special value of 2^63 - 2 will be returned in the Transaction message that describes the transaction, instead of a valid read timestamp. This special value should be discarded and not used for any subsequent queries. Please see https://cloud.google.com/spanner/docs/change-streams for more details on how to query the change stream TVFs. Partitioned DML transactions: Partitioned DML transactions are used to execute DML statements with a different execution strategy that provides different, and often better, scalability properties for large, table-wide operations than DML in a ReadWrite transaction. Smaller scoped statements, such as an OLTP workload, should prefer using ReadWrite transactions. Partitioned DML partitions the keyspace and runs the DML statement on each partition in separate, internal transactions. These transactions commit automatically when complete, and run independently from one another. To reduce lock contention, this execution strategy only acquires read locks on rows that match the WHERE clause of the statement. Additionally, the smaller per-partition transactions hold locks for less time. That said, Partitioned DML is not a drop-in replacement for standard DML used in ReadWrite transactions. - The DML statement must be fully-partitionable.
Specifically, the statement must be expressible as the union of many statements which each access only a single row of the table. - The statement is not applied atomically to all rows of the table. Rather, the statement is applied atomically to partitions of the table, in independent transactions. Secondary index rows are updated atomically with the base table rows. - Partitioned DML does not guarantee exactly-once execution semantics against a partition. The statement is applied at least once to each partition. It is strongly recommended that the DML statement be idempotent to avoid unexpected results. For instance, it is potentially dangerous to run a statement such as `UPDATE table SET column = column + 1` as it could be run multiple times against some rows. - The partitions are committed automatically - there is no support for Commit or Rollback. If the call returns an error, or if the client issuing the ExecuteSql call dies, it is possible that some rows had the statement executed on them successfully. It is also possible that the statement was never executed against other rows. - Partitioned DML transactions may only contain the execution of a single DML statement via ExecuteSql or ExecuteStreamingSql. - If any error is encountered during the execution of the partitioned DML operation (for instance, a UNIQUE INDEX violation, division by zero, or a value that cannot be stored due to schema constraints), then the operation is stopped at that point and an error is returned. It is possible that at this point, some partitions have been committed (or even committed multiple times), and other partitions have not been run at all. Given the above, Partitioned DML is a good fit for large, database-wide operations that are idempotent, such as deleting old rows from a very large table. # Begin a new transaction and execute this read or SQL query in it. The transaction ID of the new transaction is returned in ResultSetMetadata.transaction, which is a Transaction. + "begin": { # Transactions: Each session can have at most one active transaction at a time (note that standalone reads and queries use a transaction internally and do count towards the one transaction limit). After the active transaction is completed, the session can immediately be re-used for the next transaction. It is not necessary to create a new session for each transaction. Transaction modes: Cloud Spanner supports three transaction modes: 1. Locking read-write. This type of transaction is the only way to write data into Cloud Spanner. These transactions rely on pessimistic locking and, if necessary, two-phase commit. Locking read-write transactions may abort, requiring the application to retry. 2. Snapshot read-only. Snapshot read-only transactions provide guaranteed consistency across several reads, but do not allow writes. Snapshot read-only transactions can be configured to read at timestamps in the past, or configured to perform a strong read (where Spanner will select a timestamp such that the read is guaranteed to see the effects of all transactions that have committed before the start of the read). Snapshot read-only transactions do not need to be committed. Queries on change streams must be performed with the snapshot read-only transaction mode, specifying a strong read. See TransactionOptions.ReadOnly.strong for more details. 3. Partitioned DML. This type of transaction is used to execute a single Partitioned DML statement.
Partitioned DML partitions the key space and runs the DML statement over each partition in parallel using separate, internal transactions that commit independently. Partitioned DML transactions do not need to be committed. For transactions that only read, snapshot read-only transactions provide simpler semantics and are almost always faster. In particular, read-only transactions do not take locks, so they do not conflict with read-write transactions. As a consequence of not taking locks, they also do not abort, so retry loops are not needed. Transactions may only read and write data in a single database. They may, however, read and write data in different tables within that database. Locking read-write transactions: Locking transactions may be used to atomically read-modify-write data anywhere in a database. This type of transaction is externally consistent. Clients should attempt to minimize the amount of time a transaction is active. Faster transactions commit with higher probability and cause less contention. Cloud Spanner attempts to keep read locks active as long as the transaction continues to do reads, and the transaction has not been terminated by Commit or Rollback. Long periods of inactivity at the client may cause Cloud Spanner to release a transaction's locks and abort it. Conceptually, a read-write transaction consists of zero or more reads or SQL statements followed by Commit. At any time before Commit, the client can send a Rollback request to abort the transaction. Semantics: Cloud Spanner can commit the transaction if all read locks it acquired are still valid at commit time, and it is able to acquire write locks for all writes. Cloud Spanner can abort the transaction for any reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees that the transaction has not modified any user data in Cloud Spanner. Unless the transaction commits, Cloud Spanner makes no guarantees about how long the transaction's locks were held for. It is an error to use Cloud Spanner locks for any sort of mutual exclusion other than between Cloud Spanner transactions themselves. Retrying aborted transactions: When a transaction aborts, the application can choose to retry the whole transaction again. To maximize the chances of successfully committing the retry, the client should execute the retry in the same session as the original attempt. The original session's lock priority increases with each consecutive abort, meaning that each attempt has a slightly better chance of success than the previous. Note that the lock priority is preserved per session (not per transaction). Lock priority is set by the first read or write in the first attempt of a read-write transaction. If the application starts a new session to retry the whole transaction, the transaction loses its original lock priority. Moreover, the lock priority is only preserved if the transaction fails with an `ABORTED` error. Under some circumstances (for example, many transactions attempting to modify the same row(s)), a transaction can abort many times in a short period before successfully committing. Thus, it is not a good idea to cap the number of retries a transaction can attempt; instead, it is better to limit the total amount of time spent retrying. Idle transactions: A transaction is considered idle if it has no outstanding reads or SQL queries and has not started a read or SQL query within the last 10 seconds. Idle transactions can be aborted by Cloud Spanner so that they don't hold on to locks indefinitely.
If an idle transaction is aborted, the commit will fail with error `ABORTED`. If this behavior is undesirable, periodically executing a simple SQL query in the transaction (for example, `SELECT 1`) prevents the transaction from becoming idle. Snapshot read-only transactions: Snapshot read-only transactions provide a simpler method than locking read-write transactions for doing several consistent reads. However, this type of transaction does not support writes. Snapshot transactions do not take locks. Instead, they work by choosing a Cloud Spanner timestamp, then executing all reads at that timestamp. Since they do not acquire locks, they do not block concurrent read-write transactions. Unlike locking read-write transactions, snapshot read-only transactions never abort. They can fail if the chosen read timestamp is garbage collected; however, the default garbage collection policy is generous enough that most applications do not need to worry about this in practice. Snapshot read-only transactions do not need to call Commit or Rollback (and in fact are not permitted to do so). To execute a snapshot transaction, the client specifies a timestamp bound, which tells Cloud Spanner how to choose a read timestamp. The types of timestamp bound are: - Strong (the default). - Bounded staleness. - Exact staleness. If the Cloud Spanner database to be read is geographically distributed, stale read-only transactions can execute more quickly than strong or read-write transactions, because they are able to execute far from the leader replica. Each type of timestamp bound is discussed in detail below. Strong: Strong reads are guaranteed to see the effects of all transactions that have committed before the start of the read. Furthermore, all rows yielded by a single read are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. Strong reads are not repeatable: two consecutive strong read-only transactions might return inconsistent results if there are concurrent writes. If consistency across reads is required, the reads should be executed within a transaction or at an exact read timestamp. Queries on change streams (see below for more details) must also specify the strong read timestamp bound. See TransactionOptions.ReadOnly.strong. Exact staleness: These timestamp bounds execute reads at a user-specified timestamp. Reads at a timestamp are guaranteed to see a consistent prefix of the global transaction history: they observe modifications done by all transactions with a commit timestamp less than or equal to the read timestamp, and observe none of the modifications done by transactions with a larger commit timestamp. They will block until all conflicting transactions that may be assigned commit timestamps <= the read timestamp have finished. The timestamp can either be expressed as an absolute Cloud Spanner commit timestamp or a staleness relative to the current time. These modes do not require a "negotiation phase" to pick a timestamp. As a result, they execute slightly faster than the equivalent boundedly stale concurrency modes. On the other hand, boundedly stale reads usually return fresher results. See TransactionOptions.ReadOnly.read_timestamp and TransactionOptions.ReadOnly.exact_staleness. Bounded staleness: Bounded staleness modes allow Cloud Spanner to pick the read timestamp, subject to a user-provided staleness bound.
Cloud Spanner chooses the newest timestamp within the staleness bound that allows execution of the reads at the closest available replica without blocking. All rows yielded are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. Boundedly stale reads are not repeatable: two stale reads, even if they use the same staleness bound, can execute at different timestamps and thus return inconsistent results. Boundedly stale reads execute in two phases: the first phase negotiates a timestamp among all replicas needed to serve the read. In the second phase, reads are executed at the negotiated timestamp. As a result of the two-phase execution, bounded staleness reads are usually a little slower than comparable exact staleness reads. However, they are typically able to return fresher results, and are more likely to execute at the closest replica. Because the timestamp negotiation requires up-front knowledge of which rows will be read, it can only be used with single-use read-only transactions. See TransactionOptions.ReadOnly.max_staleness and TransactionOptions.ReadOnly.min_read_timestamp. Old read timestamps and garbage collection: Cloud Spanner continuously garbage collects deleted and overwritten data in the background to reclaim storage space. This process is known as "version GC". By default, version GC reclaims versions after they are one hour old. Because of this, Cloud Spanner cannot perform reads at read timestamps more than one hour in the past. This restriction also applies to in-progress reads and/or SQL queries whose timestamps become too old while executing. Reads and SQL queries with too-old read timestamps fail with the error `FAILED_PRECONDITION`. You can configure and extend the `VERSION_RETENTION_PERIOD` of a database up to a period as long as one week, which allows Cloud Spanner to perform reads up to one week in the past. Querying change streams: A change stream is a schema object that can be configured to watch data changes on the entire database, a set of tables, or a set of columns in a database. When a change stream is created, Spanner automatically defines a corresponding SQL Table-Valued Function (TVF) that can be used to query the change records in the associated change stream using the ExecuteStreamingSql API. The name of the TVF for a change stream is generated from the name of the change stream: READ_. All queries on change stream TVFs must be executed using the ExecuteStreamingSql API with a single-use read-only transaction with a strong read-only timestamp_bound. The change stream TVF allows users to specify the start_timestamp and end_timestamp for the time range of interest. All change records within the retention period are accessible using the strong read-only timestamp_bound. All other TransactionOptions are invalid for change stream queries. In addition, if TransactionOptions.read_only.return_read_timestamp is set to true, a special value of 2^63 - 2 will be returned in the Transaction message that describes the transaction, instead of a valid read timestamp. This special value should be discarded and not used for any subsequent queries. Please see https://cloud.google.com/spanner/docs/change-streams for more details on how to query the change stream TVFs.
Partitioned DML transactions: Partitioned DML transactions are used to execute DML statements with a different execution strategy that provides different, and often better, scalability properties for large, table-wide operations than DML in a ReadWrite transaction. Smaller-scoped statements, such as those in an OLTP workload, should use ReadWrite transactions. Partitioned DML partitions the keyspace and runs the DML statement on each partition in separate, internal transactions. These transactions commit automatically when complete, and run independently from one another. To reduce lock contention, this execution strategy only acquires read locks on rows that match the WHERE clause of the statement. Additionally, the smaller per-partition transactions hold locks for less time. That said, Partitioned DML is not a drop-in replacement for standard DML used in ReadWrite transactions. - The DML statement must be fully-partitionable. Specifically, the statement must be expressible as the union of many statements which each access only a single row of the table. - The statement is not applied atomically to all rows of the table. Rather, the statement is applied atomically to partitions of the table, in independent transactions. Secondary index rows are updated atomically with the base table rows. - Partitioned DML does not guarantee exactly-once execution semantics against a partition. The statement is applied at least once to each partition. It is strongly recommended that the DML statement be idempotent to avoid unexpected results. For instance, it is potentially dangerous to run a statement such as `UPDATE table SET column = column + 1` as it could be run multiple times against some rows. - The partitions are committed automatically - there is no support for Commit or Rollback. If the call returns an error, or if the client issuing the ExecuteSql call dies, it is possible that some rows had the statement executed on them successfully. It is also possible that the statement was never executed against other rows. - Partitioned DML transactions may only contain the execution of a single DML statement via ExecuteSql or ExecuteStreamingSql. - If any error is encountered during the execution of the partitioned DML operation (for instance, a UNIQUE INDEX violation, division by zero, or a value that cannot be stored due to schema constraints), then the operation is stopped at that point and an error is returned. It is possible that at this point, some partitions have been committed (or even committed multiple times), and other partitions have not been run at all. Given the above, Partitioned DML is a good fit for large, database-wide operations that are idempotent, such as deleting old rows from a very large table. # Begin a new transaction and execute this read or SQL query in it. The transaction ID of the new transaction is returned in ResultSetMetadata.transaction, which is a Transaction. "excludeTxnFromChangeStreams": True or False, # When `exclude_txn_from_change_streams` is set to `true`: * Modifications from this transaction will not be recorded in change streams with DDL option `allow_txn_exclusion=true` that are tracking columns modified by these transactions. * Modifications from this transaction will be recorded in change streams with DDL option `allow_txn_exclusion=false or not set` that are tracking columns modified by these transactions.
When `exclude_txn_from_change_streams` is set to `false` or not set, modifications from this transaction will be recorded in all change streams that are tracking columns modified by these transactions. `exclude_txn_from_change_streams` may only be specified for read-write or partitioned-dml transactions; otherwise, the API will return an `INVALID_ARGUMENT` error. "partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction. Authorization to begin a Partitioned DML transaction requires `spanner.databases.beginPartitionedDmlTransaction` permission on the `session` resource. }, @@ -976,7 +976,7 @@
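As an illustrative sketch of the Partitioned DML flow described above, using the generated google-api-python-client surface: the transaction is begun explicitly with the `partitionedDml` option and the single DML statement is then issued through `executeSql`. The project, instance, database, and session names below are placeholders, and the UPDATE statement is a hypothetical idempotent example, not part of this API's reference text.

    from googleapiclient.discovery import build

    service = build("spanner", "v1")
    sessions = service.projects().instances().databases().sessions()

    # Placeholder session resource name; create one with sessions().create() first.
    session = "projects/my-project/instances/my-instance/databases/my-db/sessions/my-session"

    # Begin a Partitioned DML transaction; it cannot be committed or rolled back.
    txn = sessions.beginTransaction(
        session=session,
        body={"options": {"partitionedDml": {}}},
    ).execute()

    # Issue exactly one fully-partitionable, idempotent DML statement.
    result = sessions.executeSql(
        session=session,
        body={
            "sql": "UPDATE Albums SET MarketingBudget = 0 WHERE MarketingBudget IS NULL",
            "transaction": {"id": txn["id"]},
            "seqno": 1,  # seqno is required for DML statements
        },
    ).execute()

    # Partitioned DML reports a lower bound on the number of rows modified.
    print(result.get("stats", {}).get("rowCountLowerBound"))

Because the statement is applied at least once per partition, an idempotent statement like the one above stays correct even if some partitions see it twice.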

Method Details
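Before the method details, a sketch of the change stream query pattern required by the options above: change stream TVFs must be queried through ExecuteStreamingSql with a single-use, strong, read-only transaction. The stream name `my_stream` (hence the TVF `READ_my_stream`), the session name, and the start timestamp are assumptions for illustration.

    from googleapiclient.discovery import build

    service = build("spanner", "v1")
    sessions = service.projects().instances().databases().sessions()
    session = "projects/my-project/instances/my-instance/databases/my-db/sessions/my-session"

    # Single-use strong read-only bound: the only TransactionOptions accepted
    # for change stream TVF queries.
    response = sessions.executeStreamingSql(
        session=session,
        body={
            "sql": (
                "SELECT ChangeRecord FROM READ_my_stream("
                "start_timestamp => @start_time, "
                "end_timestamp => NULL, "
                "partition_token => NULL, "
                "heartbeat_milliseconds => 10000)"
            ),
            "params": {"start_time": "2024-06-04T00:00:00Z"},
            "paramTypes": {"start_time": {"code": "TIMESTAMP"}},
            "transaction": {"singleUse": {"readOnly": {"strong": True}}},
        },
    ).execute()

    # Over REST the stream is buffered; expect a list of PartialResultSet dicts.
    for partial_result_set in response:
        print(partial_result_set.get("values"))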

}, }, "id": "A String", # Execute the read or SQL query in a previously-started transaction. - "singleUse": { # Transactions: Each session can have at most one active transaction at a time (note that standalone reads and queries use a transaction internally and do count towards the one transaction limit). After the active transaction is completed, the session can immediately be re-used for the next transaction. It is not necessary to create a new session for each transaction. Transaction modes: Cloud Spanner supports three transaction modes: 1. Locking read-write. This type of transaction is the only way to write data into Cloud Spanner. These transactions rely on pessimistic locking and, if necessary, two-phase commit. Locking read-write transactions may abort, requiring the application to retry. 2. Snapshot read-only. Snapshot read-only transactions provide guaranteed consistency across several reads, but do not allow writes. Snapshot read-only transactions can be configured to read at timestamps in the past, or configured to perform a strong read (where Spanner will select a timestamp such that the read is guaranteed to see the effects of all transactions that have committed before the start of the read). Snapshot read-only transactions do not need to be committed. Queries on change streams must be performed with the snapshot read-only transaction mode, specifying a strong read. Please see TransactionOptions.ReadOnly.strong for more details. 3. Partitioned DML. This type of transaction is used to execute a single Partitioned DML statement. Partitioned DML partitions the key space and runs the DML statement over each partition in parallel using separate, internal transactions that commit independently. Partitioned DML transactions do not need to be committed. For transactions that only read, snapshot read-only transactions provide simpler semantics and are almost always faster. In particular, read-only transactions do not take locks, so they do not conflict with read-write transactions. As a consequence of not taking locks, they also do not abort, so retry loops are not needed. Transactions may only read and write data in a single database. They may, however, read and write data in different tables within that database. Locking read-write transactions: Locking transactions may be used to atomically read-modify-write data anywhere in a database. This type of transaction is externally consistent. Clients should attempt to minimize the amount of time a transaction is active. Faster transactions commit with higher probability and cause less contention. Cloud Spanner attempts to keep read locks active as long as the transaction continues to do reads, and the transaction has not been terminated by Commit or Rollback. Long periods of inactivity at the client may cause Cloud Spanner to release a transaction's locks and abort it. Conceptually, a read-write transaction consists of zero or more reads or SQL statements followed by Commit. At any time before Commit, the client can send a Rollback request to abort the transaction. Semantics: Cloud Spanner can commit the transaction if all read locks it acquired are still valid at commit time, and it is able to acquire write locks for all writes. Cloud Spanner can abort the transaction for any reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees that the transaction has not modified any user data in Cloud Spanner. Unless the transaction commits, Cloud Spanner makes no guarantees about how long the transaction's locks were held for.
It is an error to use Cloud Spanner locks for any sort of mutual exclusion other than between Cloud Spanner transactions themselves. Retrying aborted transactions: When a transaction aborts, the application can choose to retry the whole transaction again. To maximize the chances of successfully committing the retry, the client should execute the retry in the same session as the original attempt. The original session's lock priority increases with each consecutive abort, meaning that each attempt has a slightly better chance of success than the previous. Note that the lock priority is preserved per session (not per transaction). Lock priority is set by the first read or write in the first attempt of a read-write transaction. If the application starts a new session to retry the whole transaction, the transaction loses its original lock priority. Moreover, the lock priority is only preserved if the transaction fails with an `ABORTED` error. Under some circumstances (for example, many transactions attempting to modify the same row(s)), a transaction can abort many times in a short period before successfully committing. Thus, it is not a good idea to cap the number of retries a transaction can attempt; instead, it is better to limit the total amount of time spent retrying. Idle transactions: A transaction is considered idle if it has no outstanding reads or SQL queries and has not started a read or SQL query within the last 10 seconds. Idle transactions can be aborted by Cloud Spanner so that they don't hold on to locks indefinitely. If an idle transaction is aborted, the commit will fail with error `ABORTED`. If this behavior is undesirable, periodically executing a simple SQL query in the transaction (for example, `SELECT 1`) prevents the transaction from becoming idle. Snapshot read-only transactions: Snapshot read-only transactions provide a simpler method than locking read-write transactions for doing several consistent reads. However, this type of transaction does not support writes. Snapshot transactions do not take locks. Instead, they work by choosing a Cloud Spanner timestamp, then executing all reads at that timestamp. Since they do not acquire locks, they do not block concurrent read-write transactions. Unlike locking read-write transactions, snapshot read-only transactions never abort. They can fail if the chosen read timestamp is garbage collected; however, the default garbage collection policy is generous enough that most applications do not need to worry about this in practice. Snapshot read-only transactions do not need to call Commit or Rollback (and in fact are not permitted to do so). To execute a snapshot transaction, the client specifies a timestamp bound, which tells Cloud Spanner how to choose a read timestamp. The types of timestamp bound are: - Strong (the default). - Bounded staleness. - Exact staleness. If the Cloud Spanner database to be read is geographically distributed, stale read-only transactions can execute more quickly than strong or read-write transactions, because they are able to execute far from the leader replica. Each type of timestamp bound is discussed in detail below. Strong: Strong reads are guaranteed to see the effects of all transactions that have committed before the start of the read. Furthermore, all rows yielded by a single read are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction.
Strong reads are not repeatable: two consecutive strong read-only transactions might return inconsistent results if there are concurrent writes. If consistency across reads is required, the reads should be executed within a transaction or at an exact read timestamp. Queries on change streams (see below for more details) must also specify the strong read timestamp bound. See TransactionOptions.ReadOnly.strong. Exact staleness: These timestamp bounds execute reads at a user-specified timestamp. Reads at a timestamp are guaranteed to see a consistent prefix of the global transaction history: they observe modifications done by all transactions with a commit timestamp less than or equal to the read timestamp, and observe none of the modifications done by transactions with a larger commit timestamp. They will block until all conflicting transactions that may be assigned commit timestamps <= the read timestamp have finished. The timestamp can either be expressed as an absolute Cloud Spanner commit timestamp or a staleness relative to the current time. These modes do not require a "negotiation phase" to pick a timestamp. As a result, they execute slightly faster than the equivalent boundedly stale concurrency modes. On the other hand, boundedly stale reads usually return fresher results. See TransactionOptions.ReadOnly.read_timestamp and TransactionOptions.ReadOnly.exact_staleness. Bounded staleness: Bounded staleness modes allow Cloud Spanner to pick the read timestamp, subject to a user-provided staleness bound. Cloud Spanner chooses the newest timestamp within the staleness bound that allows execution of the reads at the closest available replica without blocking. All rows yielded are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. Boundedly stale reads are not repeatable: two stale reads, even if they use the same staleness bound, can execute at different timestamps and thus return inconsistent results. Boundedly stale reads execute in two phases: the first phase negotiates a timestamp among all replicas needed to serve the read. In the second phase, reads are executed at the negotiated timestamp. As a result of the two-phase execution, bounded staleness reads are usually a little slower than comparable exact staleness reads. However, they are typically able to return fresher results, and are more likely to execute at the closest replica. Because the timestamp negotiation requires up-front knowledge of which rows will be read, it can only be used with single-use read-only transactions. See TransactionOptions.ReadOnly.max_staleness and TransactionOptions.ReadOnly.min_read_timestamp. Old read timestamps and garbage collection: Cloud Spanner continuously garbage collects deleted and overwritten data in the background to reclaim storage space. This process is known as "version GC". By default, version GC reclaims versions after they are one hour old. Because of this, Cloud Spanner cannot perform reads at read timestamps more than one hour in the past. This restriction also applies to in-progress reads and/or SQL queries whose timestamps become too old while executing. Reads and SQL queries with too-old read timestamps fail with the error `FAILED_PRECONDITION`. You can configure and extend the `VERSION_RETENTION_PERIOD` of a database up to a period as long as one week, which allows Cloud Spanner to perform reads up to one week in the past.
Querying change streams: A change stream is a schema object that can be configured to watch data changes on the entire database, a set of tables, or a set of columns in a database. When a change stream is created, Spanner automatically defines a corresponding SQL Table-Valued Function (TVF) that can be used to query the change records in the associated change stream using the ExecuteStreamingSql API. The name of the TVF for a change stream is generated from the name of the change stream: READ_. All queries on change stream TVFs must be executed using the ExecuteStreamingSql API with a single-use read-only transaction with a strong read-only timestamp_bound. The change stream TVF allows users to specify the start_timestamp and end_timestamp for the time range of interest. All change records within the retention period are accessible using the strong read-only timestamp_bound. All other TransactionOptions are invalid for change stream queries. In addition, if TransactionOptions.read_only.return_read_timestamp is set to true, a special value of 2^63 - 2 will be returned in the Transaction message that describes the transaction, instead of a valid read timestamp. This special value should be discarded and not used for any subsequent queries. Please see https://cloud.google.com/spanner/docs/change-streams for more details on how to query the change stream TVFs. Partitioned DML transactions: Partitioned DML transactions are used to execute DML statements with a different execution strategy that provides different, and often better, scalability properties for large, table-wide operations than DML in a ReadWrite transaction. Smaller-scoped statements, such as those in an OLTP workload, should use ReadWrite transactions. Partitioned DML partitions the keyspace and runs the DML statement on each partition in separate, internal transactions. These transactions commit automatically when complete, and run independently from one another. To reduce lock contention, this execution strategy only acquires read locks on rows that match the WHERE clause of the statement. Additionally, the smaller per-partition transactions hold locks for less time. That said, Partitioned DML is not a drop-in replacement for standard DML used in ReadWrite transactions. - The DML statement must be fully-partitionable. Specifically, the statement must be expressible as the union of many statements which each access only a single row of the table. - The statement is not applied atomically to all rows of the table. Rather, the statement is applied atomically to partitions of the table, in independent transactions. Secondary index rows are updated atomically with the base table rows. - Partitioned DML does not guarantee exactly-once execution semantics against a partition. The statement is applied at least once to each partition. It is strongly recommended that the DML statement be idempotent to avoid unexpected results. For instance, it is potentially dangerous to run a statement such as `UPDATE table SET column = column + 1` as it could be run multiple times against some rows. - The partitions are committed automatically - there is no support for Commit or Rollback. If the call returns an error, or if the client issuing the ExecuteSql call dies, it is possible that some rows had the statement executed on them successfully. It is also possible that the statement was never executed against other rows. - Partitioned DML transactions may only contain the execution of a single DML statement via ExecuteSql or ExecuteStreamingSql.
- If any error is encountered during the execution of the partitioned DML operation (for instance, a UNIQUE INDEX violation, division by zero, or a value that cannot be stored due to schema constraints), then the operation is stopped at that point and an error is returned. It is possible that at this point, some partitions have been committed (or even committed multiple times), and other partitions have not been run at all. Given the above, Partitioned DML is a good fit for large, database-wide operations that are idempotent, such as deleting old rows from a very large table. # Execute the read or SQL query in a temporary transaction. This is the most efficient way to execute a transaction that consists of a single SQL query. + "singleUse": { # Transactions: Each session can have at most one active transaction at a time (note that standalone reads and queries use a transaction internally and do count towards the one transaction limit). After the active transaction is completed, the session can immediately be re-used for the next transaction. It is not necessary to create a new session for each transaction. Transaction modes: Cloud Spanner supports three transaction modes: 1. Locking read-write. This type of transaction is the only way to write data into Cloud Spanner. These transactions rely on pessimistic locking and, if necessary, two-phase commit. Locking read-write transactions may abort, requiring the application to retry. 2. Snapshot read-only. Snapshot read-only transactions provide guaranteed consistency across several reads, but do not allow writes. Snapshot read-only transactions can be configured to read at timestamps in the past, or configured to perform a strong read (where Spanner will select a timestamp such that the read is guaranteed to see the effects of all transactions that have committed before the start of the read). Snapshot read-only transactions do not need to be committed. Queries on change streams must be performed with the snapshot read-only transaction mode, specifying a strong read. See TransactionOptions.ReadOnly.strong for more details. 3. Partitioned DML. This type of transaction is used to execute a single Partitioned DML statement. Partitioned DML partitions the key space and runs the DML statement over each partition in parallel using separate, internal transactions that commit independently. Partitioned DML transactions do not need to be committed. For transactions that only read, snapshot read-only transactions provide simpler semantics and are almost always faster. In particular, read-only transactions do not take locks, so they do not conflict with read-write transactions. As a consequence of not taking locks, they also do not abort, so retry loops are not needed. Transactions may only read and write data in a single database. They may, however, read and write data in different tables within that database. Locking read-write transactions: Locking transactions may be used to atomically read-modify-write data anywhere in a database. This type of transaction is externally consistent. Clients should attempt to minimize the amount of time a transaction is active. Faster transactions commit with higher probability and cause less contention. Cloud Spanner attempts to keep read locks active as long as the transaction continues to do reads, and the transaction has not been terminated by Commit or Rollback. Long periods of inactivity at the client may cause Cloud Spanner to release a transaction's locks and abort it.
Conceptually, a read-write transaction consists of zero or more reads or SQL statements followed by Commit. At any time before Commit, the client can send a Rollback request to abort the transaction. Semantics: Cloud Spanner can commit the transaction if all read locks it acquired are still valid at commit time, and it is able to acquire write locks for all writes. Cloud Spanner can abort the transaction for any reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees that the transaction has not modified any user data in Cloud Spanner. Unless the transaction commits, Cloud Spanner makes no guarantees about how long the transaction's locks were held for. It is an error to use Cloud Spanner locks for any sort of mutual exclusion other than between Cloud Spanner transactions themselves. Retrying aborted transactions: When a transaction aborts, the application can choose to retry the whole transaction again. To maximize the chances of successfully committing the retry, the client should execute the retry in the same session as the original attempt. The original session's lock priority increases with each consecutive abort, meaning that each attempt has a slightly better chance of success than the previous. Note that the lock priority is preserved per session (not per transaction). Lock priority is set by the first read or write in the first attempt of a read-write transaction. If the application starts a new session to retry the whole transaction, the transaction loses its original lock priority. Moreover, the lock priority is only preserved if the transaction fails with an `ABORTED` error. Under some circumstances (for example, many transactions attempting to modify the same row(s)), a transaction can abort many times in a short period before successfully committing. Thus, it is not a good idea to cap the number of retries a transaction can attempt; instead, it is better to limit the total amount of time spent retrying. Idle transactions: A transaction is considered idle if it has no outstanding reads or SQL queries and has not started a read or SQL query within the last 10 seconds. Idle transactions can be aborted by Cloud Spanner so that they don't hold on to locks indefinitely. If an idle transaction is aborted, the commit will fail with error `ABORTED`. If this behavior is undesirable, periodically executing a simple SQL query in the transaction (for example, `SELECT 1`) prevents the transaction from becoming idle. Snapshot read-only transactions: Snapshot read-only transactions provide a simpler method than locking read-write transactions for doing several consistent reads. However, this type of transaction does not support writes. Snapshot transactions do not take locks. Instead, they work by choosing a Cloud Spanner timestamp, then executing all reads at that timestamp. Since they do not acquire locks, they do not block concurrent read-write transactions. Unlike locking read-write transactions, snapshot read-only transactions never abort. They can fail if the chosen read timestamp is garbage collected; however, the default garbage collection policy is generous enough that most applications do not need to worry about this in practice. Snapshot read-only transactions do not need to call Commit or Rollback (and in fact are not permitted to do so). To execute a snapshot transaction, the client specifies a timestamp bound, which tells Cloud Spanner how to choose a read timestamp. The types of timestamp bound are: - Strong (the default). - Bounded staleness.
- Exact staleness. If the Cloud Spanner database to be read is geographically distributed, stale read-only transactions can execute more quickly than strong or read-write transactions, because they are able to execute far from the leader replica. Each type of timestamp bound is discussed in detail below. Strong: Strong reads are guaranteed to see the effects of all transactions that have committed before the start of the read. Furthermore, all rows yielded by a single read are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. Strong reads are not repeatable: two consecutive strong read-only transactions might return inconsistent results if there are concurrent writes. If consistency across reads is required, the reads should be executed within a transaction or at an exact read timestamp. Queries on change streams (see below for more details) must also specify the strong read timestamp bound. See TransactionOptions.ReadOnly.strong. Exact staleness: These timestamp bounds execute reads at a user-specified timestamp. Reads at a timestamp are guaranteed to see a consistent prefix of the global transaction history: they observe modifications done by all transactions with a commit timestamp less than or equal to the read timestamp, and observe none of the modifications done by transactions with a larger commit timestamp. They will block until all conflicting transactions that may be assigned commit timestamps <= the read timestamp have finished. The timestamp can either be expressed as an absolute Cloud Spanner commit timestamp or a staleness relative to the current time. These modes do not require a "negotiation phase" to pick a timestamp. As a result, they execute slightly faster than the equivalent boundedly stale concurrency modes. On the other hand, boundedly stale reads usually return fresher results. See TransactionOptions.ReadOnly.read_timestamp and TransactionOptions.ReadOnly.exact_staleness. Bounded staleness: Bounded staleness modes allow Cloud Spanner to pick the read timestamp, subject to a user-provided staleness bound. Cloud Spanner chooses the newest timestamp within the staleness bound that allows execution of the reads at the closest available replica without blocking. All rows yielded are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. Boundedly stale reads are not repeatable: two stale reads, even if they use the same staleness bound, can execute at different timestamps and thus return inconsistent results. Boundedly stale reads execute in two phases: the first phase negotiates a timestamp among all replicas needed to serve the read. In the second phase, reads are executed at the negotiated timestamp. As a result of the two-phase execution, bounded staleness reads are usually a little slower than comparable exact staleness reads. However, they are typically able to return fresher results, and are more likely to execute at the closest replica. Because the timestamp negotiation requires up-front knowledge of which rows will be read, it can only be used with single-use read-only transactions. See TransactionOptions.ReadOnly.max_staleness and TransactionOptions.ReadOnly.min_read_timestamp. Old read timestamps and garbage collection: Cloud Spanner continuously garbage collects deleted and overwritten data in the background to reclaim storage space. This process is known as "version GC".
By default, version GC reclaims versions after they are one hour old. Because of this, Cloud Spanner cannot perform reads at read timestamps more than one hour in the past. This restriction also applies to in-progress reads and/or SQL queries whose timestamps become too old while executing. Reads and SQL queries with too-old read timestamps fail with the error `FAILED_PRECONDITION`. You can configure and extend the `VERSION_RETENTION_PERIOD` of a database up to a period as long as one week, which allows Cloud Spanner to perform reads up to one week in the past. Querying change streams: A change stream is a schema object that can be configured to watch data changes on the entire database, a set of tables, or a set of columns in a database. When a change stream is created, Spanner automatically defines a corresponding SQL Table-Valued Function (TVF) that can be used to query the change records in the associated change stream using the ExecuteStreamingSql API. The name of the TVF for a change stream is generated from the name of the change stream: READ_. All queries on change stream TVFs must be executed using the ExecuteStreamingSql API with a single-use read-only transaction with a strong read-only timestamp_bound. The change stream TVF allows users to specify the start_timestamp and end_timestamp for the time range of interest. All change records within the retention period are accessible using the strong read-only timestamp_bound. All other TransactionOptions are invalid for change stream queries. In addition, if TransactionOptions.read_only.return_read_timestamp is set to true, a special value of 2^63 - 2 will be returned in the Transaction message that describes the transaction, instead of a valid read timestamp. This special value should be discarded and not used for any subsequent queries. Please see https://cloud.google.com/spanner/docs/change-streams for more details on how to query the change stream TVFs. Partitioned DML transactions: Partitioned DML transactions are used to execute DML statements with a different execution strategy that provides different, and often better, scalability properties for large, table-wide operations than DML in a ReadWrite transaction. Smaller-scoped statements, such as those in an OLTP workload, should use ReadWrite transactions. Partitioned DML partitions the keyspace and runs the DML statement on each partition in separate, internal transactions. These transactions commit automatically when complete, and run independently from one another. To reduce lock contention, this execution strategy only acquires read locks on rows that match the WHERE clause of the statement. Additionally, the smaller per-partition transactions hold locks for less time. That said, Partitioned DML is not a drop-in replacement for standard DML used in ReadWrite transactions. - The DML statement must be fully-partitionable. Specifically, the statement must be expressible as the union of many statements which each access only a single row of the table. - The statement is not applied atomically to all rows of the table. Rather, the statement is applied atomically to partitions of the table, in independent transactions. Secondary index rows are updated atomically with the base table rows. - Partitioned DML does not guarantee exactly-once execution semantics against a partition. The statement is applied at least once to each partition. It is strongly recommended that the DML statement be idempotent to avoid unexpected results.
For instance, it is potentially dangerous to run a statement such as `UPDATE table SET column = column + 1` as it could be run multiple times against some rows. - The partitions are committed automatically - there is no support for Commit or Rollback. If the call returns an error, or if the client issuing the ExecuteSql call dies, it is possible that some rows had the statement executed on them successfully. It is also possible that the statement was never executed against other rows. - Partitioned DML transactions may only contain the execution of a single DML statement via ExecuteSql or ExecuteStreamingSql. - If any error is encountered during the execution of the partitioned DML operation (for instance, a UNIQUE INDEX violation, division by zero, or a value that cannot be stored due to schema constraints), then the operation is stopped at that point and an error is returned. It is possible that at this point, some partitions have been committed (or even committed multiple times), and other partitions have not been run at all. Given the above, Partitioned DML is a good fit for large, database-wide operations that are idempotent, such as deleting old rows from a very large table. # Execute the read or SQL query in a temporary transaction. This is the most efficient way to execute a transaction that consists of a single SQL query. "excludeTxnFromChangeStreams": True or False, # When `exclude_txn_from_change_streams` is set to `true`: * Modifications from this transaction will not be recorded in change streams with DDL option `allow_txn_exclusion=true` that are tracking columns modified by these transactions. * Modifications from this transaction will be recorded in change streams with DDL option `allow_txn_exclusion=false or not set` that are tracking columns modified by these transactions. When `exclude_txn_from_change_streams` is set to `false` or not set, modifications from this transaction will be recorded in all change streams that are tracking columns modified by these transactions. `exclude_txn_from_change_streams` may only be specified for read-write or partitioned-dml transactions; otherwise, the API will return an `INVALID_ARGUMENT` error. "partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction. Authorization to begin a Partitioned DML transaction requires `spanner.databases.beginPartitionedDmlTransaction` permission on the `session` resource. }, @@ -1188,7 +1188,7 @@
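To make the single-use timestamp-bound options above concrete, here is a minimal sketch of a bounded-staleness read through `executeSql`; the resource names and table are placeholders. `maxStaleness` is only valid here because the transaction is single-use, as the options note.

    from googleapiclient.discovery import build

    service = build("spanner", "v1")
    sessions = service.projects().instances().databases().sessions()
    session = "projects/my-project/instances/my-instance/databases/my-db/sessions/my-session"

    # Single-use read-only transaction with a 15-second staleness bound; Spanner
    # picks the freshest timestamp a nearby replica can serve without blocking.
    response = sessions.executeSql(
        session=session,
        body={
            "sql": "SELECT SingerId, FirstName FROM Singers",
            "transaction": {
                "singleUse": {
                    "readOnly": {
                        "maxStaleness": "15s",
                        "returnReadTimestamp": True,
                    }
                }
            },
        },
    ).execute()

    # With returnReadTimestamp set, the negotiated read timestamp comes back
    # in ResultSetMetadata.transaction.
    print(response["metadata"]["transaction"]["readTimestamp"])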

Method Details
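The retry guidance in the options above (retry in the same session to keep lock priority, and bound total retry time rather than attempt count) might look like the following sketch. The 60-second budget, resource names, and statement are assumptions; `ABORTED` surfaces as HTTP 409 on the REST surface.

    import time

    from googleapiclient.discovery import build
    from googleapiclient.errors import HttpError

    service = build("spanner", "v1")
    sessions = service.projects().instances().databases().sessions()
    session = "projects/my-project/instances/my-instance/databases/my-db/sessions/my-session"

    deadline = time.monotonic() + 60  # limit total time spent retrying, not attempts

    while True:
        # Retrying in the same session preserves the transaction's lock priority.
        txn = sessions.beginTransaction(
            session=session, body={"options": {"readWrite": {}}}
        ).execute()
        try:
            sessions.executeSql(
                session=session,
                body={
                    "sql": "UPDATE Counters SET Value = Value + 1 WHERE Id = 1",
                    "transaction": {"id": txn["id"]},
                    "seqno": 1,  # seqno is required for DML statements
                },
            ).execute()
            sessions.commit(
                session=session, body={"transactionId": txn["id"]}
            ).execute()
            break
        except HttpError as err:
            # google.rpc.Code.ABORTED maps to HTTP 409; give up once the budget is spent.
            if err.resp.status != 409 or time.monotonic() >= deadline:
                raise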

}, "sql": "A String", # Required. The query request to generate partitions for. The request will fail if the query is not root partitionable. For a query to be root partitionable, it needs to satisfy a few conditions. For example, if the query execution plan contains a distributed union operator, then it must be the first operator in the plan. For more information about other conditions, see [Read data in parallel](https://cloud.google.com/spanner/docs/reads#read_data_in_parallel). The query request must not contain DML commands, such as INSERT, UPDATE, or DELETE. Use ExecuteStreamingSql with a PartitionedDml transaction for large, partition-friendly DML operations. "transaction": { # This message is used to select the transaction in which a Read or ExecuteSql call runs. See TransactionOptions for more information about transactions. # Read-only snapshot transactions are supported; read/write and single-use transactions are not. - "begin": { # Transactions: Each session can have at most one active transaction at a time (note that standalone reads and queries use a transaction internally and do count towards the one transaction limit). After the active transaction is completed, the session can immediately be re-used for the next transaction. It is not necessary to create a new session for each transaction. Transaction modes: Cloud Spanner supports three transaction modes: 1. Locking read-write. This type of transaction is the only way to write data into Cloud Spanner. These transactions rely on pessimistic locking and, if necessary, two-phase commit. Locking read-write transactions may abort, requiring the application to retry. 2. Snapshot read-only. Snapshot read-only transactions provide guaranteed consistency across several reads, but do not allow writes. Snapshot read-only transactions can be configured to read at timestamps in the past, or configured to perform a strong read (where Spanner will select a timestamp such that the read is guaranteed to see the effects of all transactions that have committed before the start of the read). Snapshot read-only transactions do not need to be committed. Queries on change streams must be performed with the snapshot read-only transaction mode, specifying a strong read. Please see TransactionOptions.ReadOnly.strong for more details. 3. Partitioned DML. This type of transaction is used to execute a single Partitioned DML statement. Partitioned DML partitions the key space and runs the DML statement over each partition in parallel using separate, internal transactions that commit independently. Partitioned DML transactions do not need to be committed. For transactions that only read, snapshot read-only transactions provide simpler semantics and are almost always faster. In particular, read-only transactions do not take locks, so they do not conflict with read-write transactions. As a consequence of not taking locks, they also do not abort, so retry loops are not needed. Transactions may only read and write data in a single database. They may, however, read and write data in different tables within that database. Locking read-write transactions: Locking transactions may be used to atomically read-modify-write data anywhere in a database. This type of transaction is externally consistent. Clients should attempt to minimize the amount of time a transaction is active. Faster transactions commit with higher probability and cause less contention.
Cloud Spanner attempts to keep read locks active as long as the transaction continues to do reads, and the transaction has not been terminated by Commit or Rollback. Long periods of inactivity at the client may cause Cloud Spanner to release a transaction's locks and abort it. Conceptually, a read-write transaction consists of zero or more reads or SQL statements followed by Commit. At any time before Commit, the client can send a Rollback request to abort the transaction. Semantics: Cloud Spanner can commit the transaction if all read locks it acquired are still valid at commit time, and it is able to acquire write locks for all writes. Cloud Spanner can abort the transaction for any reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees that the transaction has not modified any user data in Cloud Spanner. Unless the transaction commits, Cloud Spanner makes no guarantees about how long the transaction's locks were held for. It is an error to use Cloud Spanner locks for any sort of mutual exclusion other than between Cloud Spanner transactions themselves. Retrying aborted transactions: When a transaction aborts, the application can choose to retry the whole transaction again. To maximize the chances of successfully committing the retry, the client should execute the retry in the same session as the original attempt. The original session's lock priority increases with each consecutive abort, meaning that each attempt has a slightly better chance of success than the previous. Note that the lock priority is preserved per session (not per transaction). Lock priority is set by the first read or write in the first attempt of a read-write transaction. If the application starts a new session to retry the whole transaction, the transaction loses its original lock priority. Moreover, the lock priority is only preserved if the transaction fails with an `ABORTED` error. Under some circumstances (for example, many transactions attempting to modify the same row(s)), a transaction can abort many times in a short period before successfully committing. Thus, it is not a good idea to cap the number of retries a transaction can attempt; instead, it is better to limit the total amount of time spent retrying. Idle transactions: A transaction is considered idle if it has no outstanding reads or SQL queries and has not started a read or SQL query within the last 10 seconds. Idle transactions can be aborted by Cloud Spanner so that they don't hold on to locks indefinitely. If an idle transaction is aborted, the commit will fail with error `ABORTED`. If this behavior is undesirable, periodically executing a simple SQL query in the transaction (for example, `SELECT 1`) prevents the transaction from becoming idle. Snapshot read-only transactions: Snapshot read-only transactions provide a simpler method than locking read-write transactions for doing several consistent reads. However, this type of transaction does not support writes. Snapshot transactions do not take locks. Instead, they work by choosing a Cloud Spanner timestamp, then executing all reads at that timestamp. Since they do not acquire locks, they do not block concurrent read-write transactions. Unlike locking read-write transactions, snapshot read-only transactions never abort. They can fail if the chosen read timestamp is garbage collected; however, the default garbage collection policy is generous enough that most applications do not need to worry about this in practice.
Snapshot read-only transactions do not need to call Commit or Rollback (and in fact are not permitted to do so). To execute a snapshot transaction, the client specifies a timestamp bound, which tells Cloud Spanner how to choose a read timestamp. The types of timestamp bound are: - Strong (the default). - Bounded staleness. - Exact staleness. If the Cloud Spanner database to be read is geographically distributed, stale read-only transactions can execute more quickly than strong or read-write transactions, because they are able to execute far from the leader replica. Each type of timestamp bound is discussed in detail below. Strong: Strong reads are guaranteed to see the effects of all transactions that have committed before the start of the read. Furthermore, all rows yielded by a single read are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. Strong reads are not repeatable: two consecutive strong read-only transactions might return inconsistent results if there are concurrent writes. If consistency across reads is required, the reads should be executed within a transaction or at an exact read timestamp. Queries on change streams (see below for more details) must also specify the strong read timestamp bound. See TransactionOptions.ReadOnly.strong. Exact staleness: These timestamp bounds execute reads at a user-specified timestamp. Reads at a timestamp are guaranteed to see a consistent prefix of the global transaction history: they observe modifications done by all transactions with a commit timestamp less than or equal to the read timestamp, and observe none of the modifications done by transactions with a larger commit timestamp. They will block until all conflicting transactions that may be assigned commit timestamps <= the read timestamp have finished. The timestamp can either be expressed as an absolute Cloud Spanner commit timestamp or a staleness relative to the current time. These modes do not require a "negotiation phase" to pick a timestamp. As a result, they execute slightly faster than the equivalent boundedly stale concurrency modes. On the other hand, boundedly stale reads usually return fresher results. See TransactionOptions.ReadOnly.read_timestamp and TransactionOptions.ReadOnly.exact_staleness. Bounded staleness: Bounded staleness modes allow Cloud Spanner to pick the read timestamp, subject to a user-provided staleness bound. Cloud Spanner chooses the newest timestamp within the staleness bound that allows execution of the reads at the closest available replica without blocking. All rows yielded are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. Boundedly stale reads are not repeatable: two stale reads, even if they use the same staleness bound, can execute at different timestamps and thus return inconsistent results. Boundedly stale reads execute in two phases: the first phase negotiates a timestamp among all replicas needed to serve the read. In the second phase, reads are executed at the negotiated timestamp. As a result of the two-phase execution, bounded staleness reads are usually a little slower than comparable exact staleness reads. However, they are typically able to return fresher results, and are more likely to execute at the closest replica. Because the timestamp negotiation requires up-front knowledge of which rows will be read, it can only be used with single-use read-only transactions.
See TransactionOptions.ReadOnly.max_staleness and TransactionOptions.ReadOnly.min_read_timestamp. Old read timestamps and garbage collection: Cloud Spanner continuously garbage collects deleted and overwritten data in the background to reclaim storage space. This process is known as "version GC". By default, version GC reclaims versions after they are one hour old. Because of this, Cloud Spanner cannot perform reads at read timestamps more than one hour in the past. This restriction also applies to in-progress reads and/or SQL queries whose timestamps become too old while executing. Reads and SQL queries with too-old read timestamps fail with the error `FAILED_PRECONDITION`. You can configure and extend the `VERSION_RETENTION_PERIOD` of a database up to a period as long as one week, which allows Cloud Spanner to perform reads up to one week in the past. Querying change streams: A change stream is a schema object that can be configured to watch data changes on the entire database, a set of tables, or a set of columns in a database. When a change stream is created, Spanner automatically defines a corresponding SQL Table-Valued Function (TVF) that can be used to query the change records in the associated change stream using the ExecuteStreamingSql API. The name of the TVF for a change stream is generated from the name of the change stream: READ_. All queries on change stream TVFs must be executed using the ExecuteStreamingSql API with a single-use read-only transaction with a strong read-only timestamp_bound. The change stream TVF allows users to specify the start_timestamp and end_timestamp for the time range of interest. All change records within the retention period are accessible using the strong read-only timestamp_bound. All other TransactionOptions are invalid for change stream queries. In addition, if TransactionOptions.read_only.return_read_timestamp is set to true, a special value of 2^63 - 2 will be returned in the Transaction message that describes the transaction, instead of a valid read timestamp. This special value should be discarded and not used for any subsequent queries. Please see https://cloud.google.com/spanner/docs/change-streams for more details on how to query the change stream TVFs. Partitioned DML transactions: Partitioned DML transactions are used to execute DML statements with a different execution strategy that provides different, and often better, scalability properties for large, table-wide operations than DML in a ReadWrite transaction. Smaller-scoped statements, such as those in an OLTP workload, should use ReadWrite transactions. Partitioned DML partitions the keyspace and runs the DML statement on each partition in separate, internal transactions. These transactions commit automatically when complete, and run independently from one another. To reduce lock contention, this execution strategy only acquires read locks on rows that match the WHERE clause of the statement. Additionally, the smaller per-partition transactions hold locks for less time. That said, Partitioned DML is not a drop-in replacement for standard DML used in ReadWrite transactions. - The DML statement must be fully-partitionable. Specifically, the statement must be expressible as the union of many statements which each access only a single row of the table. - The statement is not applied atomically to all rows of the table. Rather, the statement is applied atomically to partitions of the table, in independent transactions. Secondary index rows are updated atomically with the base table rows.
- Partitioned DML does not guarantee exactly-once execution semantics against a partition. The statement is applied at least once to each partition. It is strongly recommended that the DML statement be idempotent to avoid unexpected results. For instance, it is potentially dangerous to run a statement such as `UPDATE table SET column = column + 1` as it could be run multiple times against some rows. - The partitions are committed automatically - there is no support for Commit or Rollback. If the call returns an error, or if the client issuing the ExecuteSql call dies, it is possible that some rows had the statement executed on them successfully. It is also possible that the statement was never executed against other rows. - Partitioned DML transactions may only contain the execution of a single DML statement via ExecuteSql or ExecuteStreamingSql. - If any error is encountered during the execution of the partitioned DML operation (for instance, a UNIQUE INDEX violation, division by zero, or a value that cannot be stored due to schema constraints), then the operation is stopped at that point and an error is returned. It is possible that at this point, some partitions have been committed (or even committed multiple times), and other partitions have not been run at all. Given the above, Partitioned DML is a good fit for large, database-wide operations that are idempotent, such as deleting old rows from a very large table. # Begin a new transaction and execute this read or SQL query in it. The transaction ID of the new transaction is returned in ResultSetMetadata.transaction, which is a Transaction. + "begin": { # Transactions: Each session can have at most one active transaction at a time (note that standalone reads and queries use a transaction internally and do count towards the one transaction limit). After the active transaction is completed, the session can immediately be re-used for the next transaction. It is not necessary to create a new session for each transaction. Transaction modes: Cloud Spanner supports three transaction modes: 1. Locking read-write. This type of transaction is the only way to write data into Cloud Spanner. These transactions rely on pessimistic locking and, if necessary, two-phase commit. Locking read-write transactions may abort, requiring the application to retry. 2. Snapshot read-only. Snapshot read-only transactions provide guaranteed consistency across several reads, but do not allow writes. Snapshot read-only transactions can be configured to read at timestamps in the past, or configured to perform a strong read (where Spanner will select a timestamp such that the read is guaranteed to see the effects of all transactions that have committed before the start of the read). Snapshot read-only transactions do not need to be committed. Queries on change streams must be performed with the snapshot read-only transaction mode, specifying a strong read. See TransactionOptions.ReadOnly.strong for more details. 3. Partitioned DML. This type of transaction is used to execute a single Partitioned DML statement. Partitioned DML partitions the key space and runs the DML statement over each partition in parallel using separate, internal transactions that commit independently. Partitioned DML transactions do not need to be committed. For transactions that only read, snapshot read-only transactions provide simpler semantics and are almost always faster. In particular, read-only transactions do not take locks, so they do not conflict with read-write transactions.
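For orientation (this sketch is not part of the generated reference): the three numbered modes above correspond to three mutually exclusive fields of the TransactionOptions message. A minimal sketch in REST/JSON form, assuming the camelCase field names used by the HTTP API:

```python
# Sketch: the three TransactionOptions modes as REST request bodies.
# Field names assume the HTTP API's camelCase spelling of the
# snake_case names used in the prose above.

# 1. Locking read-write: the only mode that can write data.
read_write = {"readWrite": {}}

# 2. Snapshot read-only: shown here with a strong read; the other
#    timestamp bounds are discussed below.
read_only = {"readOnly": {"strong": True}}

# 3. Partitioned DML: executes a single partitionable DML statement.
partitioned_dml = {"partitionedDml": {}}
```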
As a consequence of not taking locks, they also do not abort, so retry loops are not needed. Transactions may only read-write data in a single database. They may, however, read-write data in different tables within that database. Locking read-write transactions: Locking transactions may be used to atomically read-modify-write data anywhere in a database. This type of transaction is externally consistent. Clients should attempt to minimize the amount of time a transaction is active. Faster transactions commit with higher probability and cause less contention. Cloud Spanner attempts to keep read locks active as long as the transaction continues to do reads, and the transaction has not been terminated by Commit or Rollback. Long periods of inactivity at the client may cause Cloud Spanner to release a transaction's locks and abort it. Conceptually, a read-write transaction consists of zero or more reads or SQL statements followed by Commit. At any time before Commit, the client can send a Rollback request to abort the transaction. Semantics: Cloud Spanner can commit the transaction if all read locks it acquired are still valid at commit time, and it is able to acquire write locks for all writes. Cloud Spanner can abort the transaction for any reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees that the transaction has not modified any user data in Cloud Spanner. Unless the transaction commits, Cloud Spanner makes no guarantees about how long the transaction's locks were held for. It is an error to use Cloud Spanner locks for any sort of mutual exclusion other than between Cloud Spanner transactions themselves. Retrying aborted transactions: When a transaction aborts, the application can choose to retry the whole transaction again. To maximize the chances of successfully committing the retry, the client should execute the retry in the same session as the original attempt. The original session's lock priority increases with each consecutive abort, meaning that each attempt has a slightly better chance of success than the previous. Note that the lock priority is preserved per session (not per transaction). Lock priority is set by the first read or write in the first attempt of a read-write transaction. If the application starts a new session to retry the whole transaction, the transaction loses its original lock priority. Moreover, the lock priority is only preserved if the transaction fails with an `ABORTED` error. Under some circumstances (for example, many transactions attempting to modify the same row(s)), a transaction can abort many times in a short period before successfully committing. Thus, it is not a good idea to cap the number of retries a transaction can attempt; instead, it is better to limit the total amount of time spent retrying. Idle transactions: A transaction is considered idle if it has no outstanding reads or SQL queries and has not started a read or SQL query within the last 10 seconds. Idle transactions can be aborted by Cloud Spanner so that they don't hold on to locks indefinitely. If an idle transaction is aborted, the commit will fail with the error `ABORTED`. If this behavior is undesirable, periodically executing a simple SQL query in the transaction (for example, `SELECT 1`) prevents the transaction from becoming idle. Snapshot read-only transactions: Snapshot read-only transactions provide a simpler method than locking read-write transactions for doing several consistent reads. However, this type of transaction does not support writes.
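Before the timestamp bounds are described, here is a minimal sketch of the retry guidance above: retry `ABORTED` transactions in the same session and bound the retries by elapsed time rather than attempt count. The session name, table, and SQL are hypothetical placeholders, and the mapping of `ABORTED` to HTTP 409 through the REST client is my assumption:

```python
import time

from googleapiclient.discovery import build
from googleapiclient.errors import HttpError

spanner = build("spanner", "v1")
sessions = spanner.projects().instances().databases().sessions()
session = "projects/p/instances/i/databases/d/sessions/s"  # placeholder

deadline = time.monotonic() + 60  # total retry budget, not a retry cap
while True:
    try:
        txn = sessions.beginTransaction(
            session=session, body={"options": {"readWrite": {}}}
        ).execute()
        sessions.executeSql(
            session=session,
            body={
                "transaction": {"id": txn["id"]},
                # Hypothetical table; safe inside a locking read-write txn.
                "sql": "UPDATE Scores SET Total = Total + 1 WHERE PlayerId = 1",
                "seqno": "1",  # per-transaction sequence number for DML
            },
        ).execute()
        sessions.commit(
            session=session, body={"transactionId": txn["id"]}
        ).execute()
        break
    except HttpError as err:
        # Assumption: gRPC ABORTED surfaces as HTTP 409 here. Rerun the
        # whole transaction in the same session to keep its lock priority.
        if err.resp.status == 409 and time.monotonic() < deadline:
            continue
        raise
```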
Snapshot transactions do not take locks. Instead, they work by choosing a Cloud Spanner timestamp, then executing all reads at that timestamp. Since they do not acquire locks, they do not block concurrent read-write transactions. Unlike locking read-write transactions, snapshot read-only transactions never abort. They can fail if the chosen read timestamp is garbage collected; however, the default garbage collection policy is generous enough that most applications do not need to worry about this in practice. Snapshot read-only transactions do not need to call Commit or Rollback (and in fact are not permitted to do so). To execute a snapshot transaction, the client specifies a timestamp bound, which tells Cloud Spanner how to choose a read timestamp. The types of timestamp bound are: - Strong (the default). - Bounded staleness. - Exact staleness. If the Cloud Spanner database to be read is geographically distributed, stale read-only transactions can execute more quickly than strong or read-write transactions, because they are able to execute far from the leader replica. Each type of timestamp bound is discussed in detail below. Strong: Strong reads are guaranteed to see the effects of all transactions that have committed before the start of the read. Furthermore, all rows yielded by a single read are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. Strong reads are not repeatable: two consecutive strong read-only transactions might return inconsistent results if there are concurrent writes. If consistency across reads is required, the reads should be executed within a transaction or at an exact read timestamp. Queries on change streams (see below for more details) must also specify the strong read timestamp bound. See TransactionOptions.ReadOnly.strong. Exact staleness: These timestamp bounds execute reads at a user-specified timestamp. Reads at a timestamp are guaranteed to see a consistent prefix of the global transaction history: they observe modifications done by all transactions with a commit timestamp less than or equal to the read timestamp, and observe none of the modifications done by transactions with a larger commit timestamp. They will block until all conflicting transactions that may be assigned commit timestamps <= the read timestamp have finished. The timestamp can either be expressed as an absolute Cloud Spanner commit timestamp or a staleness relative to the current time. These modes do not require a "negotiation phase" to pick a timestamp. As a result, they execute slightly faster than the equivalent boundedly stale concurrency modes. On the other hand, boundedly stale reads usually return fresher results. See TransactionOptions.ReadOnly.read_timestamp and TransactionOptions.ReadOnly.exact_staleness. Bounded staleness: Bounded staleness modes allow Cloud Spanner to pick the read timestamp, subject to a user-provided staleness bound. Cloud Spanner chooses the newest timestamp within the staleness bound that allows execution of the reads at the closest available replica without blocking. All rows yielded are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. Boundedly stale reads are not repeatable: two stale reads, even if they use the same staleness bound, can execute at different timestamps and thus return inconsistent results. 
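A sketch of the three bound types as `readOnly` options, again assuming the REST camelCase spellings of the snake_case fields referenced above; the timestamps and durations are illustrative:

```python
# Strong (the default): see all transactions committed before the read.
strong = {"readOnly": {"strong": True}}

# Exact staleness: an absolute commit timestamp, or a fixed staleness
# relative to the current time.
at_timestamp = {"readOnly": {"readTimestamp": "2024-06-01T00:00:00Z"}}
ten_seconds_old = {"readOnly": {"exactStaleness": "10s"}}

# Bounded staleness: Spanner picks the newest timestamp within the
# bound; usable only with single-use read-only transactions.
at_most_15s_old = {"readOnly": {"maxStaleness": "15s"}}
no_older_than = {"readOnly": {"minReadTimestamp": "2024-06-01T00:00:00Z"}}
```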
Boundedly stale reads execute in two phases: the first phase negotiates a timestamp among all replicas needed to serve the read. In the second phase, reads are executed at the negotiated timestamp. As a result of the two-phase execution, bounded staleness reads are usually a little slower than comparable exact staleness reads. However, they are typically able to return fresher results, and are more likely to execute at the closest replica. Because the timestamp negotiation requires up-front knowledge of which rows will be read, it can only be used with single-use read-only transactions. See TransactionOptions.ReadOnly.max_staleness and TransactionOptions.ReadOnly.min_read_timestamp. Old read timestamps and garbage collection: Cloud Spanner continuously garbage collects deleted and overwritten data in the background to reclaim storage space. This process is known as "version GC". By default, version GC reclaims versions after they are one hour old. Because of this, Cloud Spanner cannot perform reads at read timestamps more than one hour in the past. This restriction also applies to in-progress reads and/or SQL queries whose timestamps become too old while executing. Reads and SQL queries with too-old read timestamps fail with the error `FAILED_PRECONDITION`. You can configure and extend the `VERSION_RETENTION_PERIOD` of a database up to a period as long as one week, which allows Cloud Spanner to perform reads up to one week in the past. Querying change streams: A change stream is a schema object that can be configured to watch data changes on the entire database, a set of tables, or a set of columns in a database. When a change stream is created, Spanner automatically defines a corresponding SQL Table-Valued Function (TVF) that can be used to query the change records in the associated change stream using the ExecuteStreamingSql API. The name of the TVF for a change stream is generated from the name of the change stream: READ_. All queries on change stream TVFs must be executed using the ExecuteStreamingSql API with a single-use read-only transaction with a strong read-only timestamp_bound. The change stream TVF allows users to specify the start_timestamp and end_timestamp for the time range of interest. All change records within the retention period are accessible using the strong read-only timestamp_bound. All other TransactionOptions are invalid for change stream queries. In addition, if TransactionOptions.read_only.return_read_timestamp is set to true, a special value of 2^63 - 2 will be returned in the Transaction message that describes the transaction, instead of a valid read timestamp. This special value should be discarded and not used for any subsequent queries. Please see https://cloud.google.com/spanner/docs/change-streams for more details on how to query the change stream TVFs. Partitioned DML transactions: Partitioned DML transactions are used to execute DML statements with a different execution strategy that provides different, and often better, scalability properties for large, table-wide operations than DML in a ReadWrite transaction. Smaller scoped statements, such as an OLTP workload, should prefer using ReadWrite transactions. Partitioned DML partitions the keyspace and runs the DML statement on each partition in separate, internal transactions. These transactions commit automatically when complete, and run independently from one another. To reduce lock contention, this execution strategy only acquires read locks on rows that match the WHERE clause of the statement.
Additionally, the smaller per-partition transactions hold locks for less time. That said, Partitioned DML is not a drop-in replacement for standard DML used in ReadWrite transactions. - The DML statement must be fully-partitionable. Specifically, the statement must be expressible as the union of many statements which each access only a single row of the table. - The statement is not applied atomically to all rows of the table. Rather, the statement is applied atomically to partitions of the table, in independent transactions. Secondary index rows are updated atomically with the base table rows. - Partitioned DML does not guarantee exactly-once execution semantics against a partition. The statement is applied at least once to each partition. It is strongly recommended that the DML statement be idempotent to avoid unexpected results. For instance, it is potentially dangerous to run a statement such as `UPDATE table SET column = column + 1` as it could be run multiple times against some rows. - The partitions are committed automatically - there is no support for Commit or Rollback. If the call returns an error, or if the client issuing the ExecuteSql call dies, it is possible that some rows had the statement executed on them successfully. It is also possible that the statement was never executed against other rows. - Partitioned DML transactions may only contain the execution of a single DML statement via ExecuteSql or ExecuteStreamingSql. - If any error is encountered during the execution of the partitioned DML operation (for instance, a UNIQUE INDEX violation, division by zero, or a value that cannot be stored due to schema constraints), then the operation is stopped at that point and an error is returned. It is possible that at this point, some partitions have been committed (or even committed multiple times), and other partitions have not been run at all. Given the above, Partitioned DML is a good fit for large, database-wide operations that are idempotent, such as deleting old rows from a very large table. # Begin a new transaction and execute this read or SQL query in it. The transaction ID of the new transaction is returned in ResultSetMetadata.transaction, which is a Transaction. "excludeTxnFromChangeStreams": True or False, # When `exclude_txn_from_change_streams` is set to `true`: * Modifications from this transaction will not be recorded in change streams with DDL option `allow_txn_exclusion=true` that are tracking columns modified by these transactions. * Modifications from this transaction will be recorded in change streams with DDL option `allow_txn_exclusion=false or not set` that are tracking columns modified by these transactions. When `exclude_txn_from_change_streams` is set to `false` or not set, modifications from this transaction will be recorded in all change streams that are tracking columns modified by these transactions. `exclude_txn_from_change_streams` may only be specified for read-write or partitioned-dml transactions, otherwise the API will return an `INVALID_ARGUMENT` error. "partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction. Authorization to begin a Partitioned DML transaction requires `spanner.databases.beginPartitionedDmlTransaction` permission on the `session` resource. }, @@ -1205,7 +1205,7 @@
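A minimal sketch of the Partitioned DML flow documented above: begin a `partitionedDml` transaction, run one idempotent statement, and issue no Commit or Rollback. Resource and table names are hypothetical placeholders:

```python
from googleapiclient.discovery import build

spanner = build("spanner", "v1")
sessions = spanner.projects().instances().databases().sessions()
session = "projects/p/instances/i/databases/d/sessions/s"  # placeholder

txn = sessions.beginTransaction(
    session=session, body={"options": {"partitionedDml": {}}}
).execute()

# Idempotent statement: re-running it against a partition is harmless.
sessions.executeSql(
    session=session,
    body={
        "transaction": {"id": txn["id"]},
        "sql": "DELETE FROM Events WHERE CreateTime < TIMESTAMP '2023-01-01'",
        "seqno": "1",
    },
).execute()
# No Commit or Rollback: each partition commits automatically.
```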

Method Details

}, }, "id": "A String", # Execute the read or SQL query in a previously-started transaction. - "singleUse": { # Transactions: Each session can have at most one active transaction at a time (note that standalone reads and queries use a transaction internally and do count towards the one transaction limit). After the active transaction is completed, the session can immediately be re-used for the next transaction. It is not necessary to create a new session for each transaction. Transaction modes: Cloud Spanner supports three transaction modes: 1. Locking read-write. This type of transaction is the only way to write data into Cloud Spanner. These transactions rely on pessimistic locking and, if necessary, two-phase commit. Locking read-write transactions may abort, requiring the application to retry. 2. Snapshot read-only. Snapshot read-only transactions provide guaranteed consistency across several reads, but do not allow writes. Snapshot read-only transactions can be configured to read at timestamps in the past, or configured to perform a strong read (where Spanner will select a timestamp such that the read is guaranteed to see the effects of all transactions that have committed before the start of the read). Snapshot read-only transactions do not need to be committed. Queries on change streams must be performed with the snapshot read-only transaction mode, specifying a strong read. Please see TransactionOptions.ReadOnly.strong for more details. 3. Partitioned DML. This type of transaction is used to execute a single Partitioned DML statement. Partitioned DML partitions the key space and runs the DML statement over each partition in parallel using separate, internal transactions that commit independently. Partitioned DML transactions do not need to be committed. For transactions that only read, snapshot read-only transactions provide simpler semantics and are almost always faster. In particular, read-only transactions do not take locks, so they do not conflict with read-write transactions. As a consequence of not taking locks, they also do not abort, so retry loops are not needed. Transactions may only read-write data in a single database. They may, however, read-write data in different tables within that database. Locking read-write transactions: Locking transactions may be used to atomically read-modify-write data anywhere in a database. This type of transaction is externally consistent. Clients should attempt to minimize the amount of time a transaction is active. Faster transactions commit with higher probability and cause less contention. Cloud Spanner attempts to keep read locks active as long as the transaction continues to do reads, and the transaction has not been terminated by Commit or Rollback. Long periods of inactivity at the client may cause Cloud Spanner to release a transaction's locks and abort it. Conceptually, a read-write transaction consists of zero or more reads or SQL statements followed by Commit. At any time before Commit, the client can send a Rollback request to abort the transaction. Semantics: Cloud Spanner can commit the transaction if all read locks it acquired are still valid at commit time, and it is able to acquire write locks for all writes. Cloud Spanner can abort the transaction for any reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees that the transaction has not modified any user data in Cloud Spanner. Unless the transaction commits, Cloud Spanner makes no guarantees about how long the transaction's locks were held for. 
It is an error to use Cloud Spanner locks for any sort of mutual exclusion other than between Cloud Spanner transactions themselves. Retrying aborted transactions: When a transaction aborts, the application can choose to retry the whole transaction again. To maximize the chances of successfully committing the retry, the client should execute the retry in the same session as the original attempt. The original session's lock priority increases with each consecutive abort, meaning that each attempt has a slightly better chance of success than the previous. Note that the lock priority is preserved per session (not per transaction). Lock priority is set by the first read or write in the first attempt of a read-write transaction. If the application starts a new session to retry the whole transaction, the transaction loses its original lock priority. Moreover, the lock priority is only preserved if the transaction fails with an `ABORTED` error. Under some circumstances (for example, many transactions attempting to modify the same row(s)), a transaction can abort many times in a short period before successfully committing. Thus, it is not a good idea to cap the number of retries a transaction can attempt; instead, it is better to limit the total amount of time spent retrying. Idle transactions: A transaction is considered idle if it has no outstanding reads or SQL queries and has not started a read or SQL query within the last 10 seconds. Idle transactions can be aborted by Cloud Spanner so that they don't hold on to locks indefinitely. If an idle transaction is aborted, the commit will fail with the error `ABORTED`. If this behavior is undesirable, periodically executing a simple SQL query in the transaction (for example, `SELECT 1`) prevents the transaction from becoming idle. Snapshot read-only transactions: Snapshot read-only transactions provide a simpler method than locking read-write transactions for doing several consistent reads. However, this type of transaction does not support writes. Snapshot transactions do not take locks. Instead, they work by choosing a Cloud Spanner timestamp, then executing all reads at that timestamp. Since they do not acquire locks, they do not block concurrent read-write transactions. Unlike locking read-write transactions, snapshot read-only transactions never abort. They can fail if the chosen read timestamp is garbage collected; however, the default garbage collection policy is generous enough that most applications do not need to worry about this in practice. Snapshot read-only transactions do not need to call Commit or Rollback (and in fact are not permitted to do so). To execute a snapshot transaction, the client specifies a timestamp bound, which tells Cloud Spanner how to choose a read timestamp. The types of timestamp bound are: - Strong (the default). - Bounded staleness. - Exact staleness. If the Cloud Spanner database to be read is geographically distributed, stale read-only transactions can execute more quickly than strong or read-write transactions, because they are able to execute far from the leader replica. Each type of timestamp bound is discussed in detail below. Strong: Strong reads are guaranteed to see the effects of all transactions that have committed before the start of the read. Furthermore, all rows yielded by a single read are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction.
Strong reads are not repeatable: two consecutive strong read-only transactions might return inconsistent results if there are concurrent writes. If consistency across reads is required, the reads should be executed within a transaction or at an exact read timestamp. Queries on change streams (see below for more details) must also specify the strong read timestamp bound. See TransactionOptions.ReadOnly.strong. Exact staleness: These timestamp bounds execute reads at a user-specified timestamp. Reads at a timestamp are guaranteed to see a consistent prefix of the global transaction history: they observe modifications done by all transactions with a commit timestamp less than or equal to the read timestamp, and observe none of the modifications done by transactions with a larger commit timestamp. They will block until all conflicting transactions that may be assigned commit timestamps <= the read timestamp have finished. The timestamp can either be expressed as an absolute Cloud Spanner commit timestamp or a staleness relative to the current time. These modes do not require a "negotiation phase" to pick a timestamp. As a result, they execute slightly faster than the equivalent boundedly stale concurrency modes. On the other hand, boundedly stale reads usually return fresher results. See TransactionOptions.ReadOnly.read_timestamp and TransactionOptions.ReadOnly.exact_staleness. Bounded staleness: Bounded staleness modes allow Cloud Spanner to pick the read timestamp, subject to a user-provided staleness bound. Cloud Spanner chooses the newest timestamp within the staleness bound that allows execution of the reads at the closest available replica without blocking. All rows yielded are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. Boundedly stale reads are not repeatable: two stale reads, even if they use the same staleness bound, can execute at different timestamps and thus return inconsistent results. Boundedly stale reads execute in two phases: the first phase negotiates a timestamp among all replicas needed to serve the read. In the second phase, reads are executed at the negotiated timestamp. As a result of the two-phase execution, bounded staleness reads are usually a little slower than comparable exact staleness reads. However, they are typically able to return fresher results, and are more likely to execute at the closest replica. Because the timestamp negotiation requires up-front knowledge of which rows will be read, it can only be used with single-use read-only transactions. See TransactionOptions.ReadOnly.max_staleness and TransactionOptions.ReadOnly.min_read_timestamp. Old read timestamps and garbage collection: Cloud Spanner continuously garbage collects deleted and overwritten data in the background to reclaim storage space. This process is known as "version GC". By default, version GC reclaims versions after they are one hour old. Because of this, Cloud Spanner cannot perform reads at read timestamps more than one hour in the past. This restriction also applies to in-progress reads and/or SQL queries whose timestamps become too old while executing. Reads and SQL queries with too-old read timestamps fail with the error `FAILED_PRECONDITION`. You can configure and extend the `VERSION_RETENTION_PERIOD` of a database up to a period as long as one week, which allows Cloud Spanner to perform reads up to one week in the past.
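As a sketch of the retention knob just mentioned (the DDL option is `version_retention_period` in GoogleSQL; the database name and the `7d` value are illustrative):

```python
from googleapiclient.discovery import build

spanner = build("spanner", "v1")
spanner.projects().instances().databases().updateDdl(
    database="projects/p/instances/i/databases/d",  # placeholder
    body={
        "statements": [
            "ALTER DATABASE d SET OPTIONS (version_retention_period = '7d')"
        ]
    },
).execute()  # returns a long-running Operation
```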
Querying change streams: A change stream is a schema object that can be configured to watch data changes on the entire database, a set of tables, or a set of columns in a database. When a change stream is created, Spanner automatically defines a corresponding SQL Table-Valued Function (TVF) that can be used to query the change records in the associated change stream using the ExecuteStreamingSql API. The name of the TVF for a change stream is generated from the name of the change stream: READ_. All queries on change stream TVFs must be executed using the ExecuteStreamingSql API with a single-use read-only transaction with a strong read-only timestamp_bound. The change stream TVF allows users to specify the start_timestamp and end_timestamp for the time range of interest. All change records within the retention period are accessible using the strong read-only timestamp_bound. All other TransactionOptions are invalid for change stream queries. In addition, if TransactionOptions.read_only.return_read_timestamp is set to true, a special value of 2^63 - 2 will be returned in the Transaction message that describes the transaction, instead of a valid read timestamp. This special value should be discarded and not used for any subsequent queries. Please see https://cloud.google.com/spanner/docs/change-streams for more details on how to query the change stream TVFs. Partitioned DML transactions: Partitioned DML transactions are used to execute DML statements with a different execution strategy that provides different, and often better, scalability properties for large, table-wide operations than DML in a ReadWrite transaction. Smaller scoped statements, such as an OLTP workload, should prefer using ReadWrite transactions. Partitioned DML partitions the keyspace and runs the DML statement on each partition in separate, internal transactions. These transactions commit automatically when complete, and run independently from one another. To reduce lock contention, this execution strategy only acquires read locks on rows that match the WHERE clause of the statement. Additionally, the smaller per-partition transactions hold locks for less time. That said, Partitioned DML is not a drop-in replacement for standard DML used in ReadWrite transactions. - The DML statement must be fully-partitionable. Specifically, the statement must be expressible as the union of many statements which each access only a single row of the table. - The statement is not applied atomically to all rows of the table. Rather, the statement is applied atomically to partitions of the table, in independent transactions. Secondary index rows are updated atomically with the base table rows. - Partitioned DML does not guarantee exactly-once execution semantics against a partition. The statement is applied at least once to each partition. It is strongly recommended that the DML statement be idempotent to avoid unexpected results. For instance, it is potentially dangerous to run a statement such as `UPDATE table SET column = column + 1` as it could be run multiple times against some rows. - The partitions are committed automatically - there is no support for Commit or Rollback. If the call returns an error, or if the client issuing the ExecuteSql call dies, it is possible that some rows had the statement executed on them successfully. It is also possible that the statement was never executed against other rows. - Partitioned DML transactions may only contain the execution of a single DML statement via ExecuteSql or ExecuteStreamingSql.
- If any error is encountered during the execution of the partitioned DML operation (for instance, a UNIQUE INDEX violation, division by zero, or a value that cannot be stored due to schema constraints), then the operation is stopped at that point and an error is returned. It is possible that at this point, some partitions have been committed (or even committed multiple times), and other partitions have not been run at all. Given the above, Partitioned DML is a good fit for large, database-wide operations that are idempotent, such as deleting old rows from a very large table. # Execute the read or SQL query in a temporary transaction. This is the most efficient way to execute a transaction that consists of a single SQL query. + "singleUse": { # Transactions: Each session can have at most one active transaction at a time (note that standalone reads and queries use a transaction internally and do count towards the one transaction limit). After the active transaction is completed, the session can immediately be re-used for the next transaction. It is not necessary to create a new session for each transaction. Transaction modes: Cloud Spanner supports three transaction modes: 1. Locking read-write. This type of transaction is the only way to write data into Cloud Spanner. These transactions rely on pessimistic locking and, if necessary, two-phase commit. Locking read-write transactions may abort, requiring the application to retry. 2. Snapshot read-only. Snapshot read-only transactions provide guaranteed consistency across several reads, but do not allow writes. Snapshot read-only transactions can be configured to read at timestamps in the past, or configured to perform a strong read (where Spanner will select a timestamp such that the read is guaranteed to see the effects of all transactions that have committed before the start of the read). Snapshot read-only transactions do not need to be committed. Queries on change streams must be performed with the snapshot read-only transaction mode, specifying a strong read. See TransactionOptions.ReadOnly.strong for more details. 3. Partitioned DML. This type of transaction is used to execute a single Partitioned DML statement. Partitioned DML partitions the key space and runs the DML statement over each partition in parallel using separate, internal transactions that commit independently. Partitioned DML transactions do not need to be committed. For transactions that only read, snapshot read-only transactions provide simpler semantics and are almost always faster. In particular, read-only transactions do not take locks, so they do not conflict with read-write transactions. As a consequence of not taking locks, they also do not abort, so retry loops are not needed. Transactions may only read-write data in a single database. They may, however, read-write data in different tables within that database. Locking read-write transactions: Locking transactions may be used to atomically read-modify-write data anywhere in a database. This type of transaction is externally consistent. Clients should attempt to minimize the amount of time a transaction is active. Faster transactions commit with higher probability and cause less contention. Cloud Spanner attempts to keep read locks active as long as the transaction continues to do reads, and the transaction has not been terminated by Commit or Rollback. Long periods of inactivity at the client may cause Cloud Spanner to release a transaction's locks and abort it.
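A small sketch of how a client might guard a long-lived read-write transaction against this inactivity abort, using the `SELECT 1` keep-alive noted just below; `sessions`, `session`, and the transaction id are assumed to come from earlier calls, and `stop` is a `threading.Event` owned by the caller:

```python
import time


def keep_alive(sessions, session, txn_id, stop):
    """Ping the transaction well inside the ~10 second idle window."""
    while not stop.is_set():
        sessions.executeSql(
            session=session,
            body={"transaction": {"id": txn_id}, "sql": "SELECT 1"},
        ).execute()
        time.sleep(5)
```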
Conceptually, a read-write transaction consists of zero or more reads or SQL statements followed by Commit. At any time before Commit, the client can send a Rollback request to abort the transaction. Semantics: Cloud Spanner can commit the transaction if all read locks it acquired are still valid at commit time, and it is able to acquire write locks for all writes. Cloud Spanner can abort the transaction for any reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees that the transaction has not modified any user data in Cloud Spanner. Unless the transaction commits, Cloud Spanner makes no guarantees about how long the transaction's locks were held for. It is an error to use Cloud Spanner locks for any sort of mutual exclusion other than between Cloud Spanner transactions themselves. Retrying aborted transactions: When a transaction aborts, the application can choose to retry the whole transaction again. To maximize the chances of successfully committing the retry, the client should execute the retry in the same session as the original attempt. The original session's lock priority increases with each consecutive abort, meaning that each attempt has a slightly better chance of success than the previous. Note that the lock priority is preserved per session (not per transaction). Lock priority is set by the first read or write in the first attempt of a read-write transaction. If the application starts a new session to retry the whole transaction, the transaction loses its original lock priority. Moreover, the lock priority is only preserved if the transaction fails with an `ABORTED` error. Under some circumstances (for example, many transactions attempting to modify the same row(s)), a transaction can abort many times in a short period before successfully committing. Thus, it is not a good idea to cap the number of retries a transaction can attempt; instead, it is better to limit the total amount of time spent retrying. Idle transactions: A transaction is considered idle if it has no outstanding reads or SQL queries and has not started a read or SQL query within the last 10 seconds. Idle transactions can be aborted by Cloud Spanner so that they don't hold on to locks indefinitely. If an idle transaction is aborted, the commit will fail with the error `ABORTED`. If this behavior is undesirable, periodically executing a simple SQL query in the transaction (for example, `SELECT 1`) prevents the transaction from becoming idle. Snapshot read-only transactions: Snapshot read-only transactions provide a simpler method than locking read-write transactions for doing several consistent reads. However, this type of transaction does not support writes. Snapshot transactions do not take locks. Instead, they work by choosing a Cloud Spanner timestamp, then executing all reads at that timestamp. Since they do not acquire locks, they do not block concurrent read-write transactions. Unlike locking read-write transactions, snapshot read-only transactions never abort. They can fail if the chosen read timestamp is garbage collected; however, the default garbage collection policy is generous enough that most applications do not need to worry about this in practice. Snapshot read-only transactions do not need to call Commit or Rollback (and in fact are not permitted to do so). To execute a snapshot transaction, the client specifies a timestamp bound, which tells Cloud Spanner how to choose a read timestamp. The types of timestamp bound are: - Strong (the default). - Bounded staleness.
- Exact staleness. If the Cloud Spanner database to be read is geographically distributed, stale read-only transactions can execute more quickly than strong or read-write transactions, because they are able to execute far from the leader replica. Each type of timestamp bound is discussed in detail below. Strong: Strong reads are guaranteed to see the effects of all transactions that have committed before the start of the read. Furthermore, all rows yielded by a single read are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. Strong reads are not repeatable: two consecutive strong read-only transactions might return inconsistent results if there are concurrent writes. If consistency across reads is required, the reads should be executed within a transaction or at an exact read timestamp. Queries on change streams (see below for more details) must also specify the strong read timestamp bound. See TransactionOptions.ReadOnly.strong. Exact staleness: These timestamp bounds execute reads at a user-specified timestamp. Reads at a timestamp are guaranteed to see a consistent prefix of the global transaction history: they observe modifications done by all transactions with a commit timestamp less than or equal to the read timestamp, and observe none of the modifications done by transactions with a larger commit timestamp. They will block until all conflicting transactions that may be assigned commit timestamps <= the read timestamp have finished. The timestamp can either be expressed as an absolute Cloud Spanner commit timestamp or a staleness relative to the current time. These modes do not require a "negotiation phase" to pick a timestamp. As a result, they execute slightly faster than the equivalent boundedly stale concurrency modes. On the other hand, boundedly stale reads usually return fresher results. See TransactionOptions.ReadOnly.read_timestamp and TransactionOptions.ReadOnly.exact_staleness. Bounded staleness: Bounded staleness modes allow Cloud Spanner to pick the read timestamp, subject to a user-provided staleness bound. Cloud Spanner chooses the newest timestamp within the staleness bound that allows execution of the reads at the closest available replica without blocking. All rows yielded are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. Boundedly stale reads are not repeatable: two stale reads, even if they use the same staleness bound, can execute at different timestamps and thus return inconsistent results. Boundedly stale reads execute in two phases: the first phase negotiates a timestamp among all replicas needed to serve the read. In the second phase, reads are executed at the negotiated timestamp. As a result of the two-phase execution, bounded staleness reads are usually a little slower than comparable exact staleness reads. However, they are typically able to return fresher results, and are more likely to execute at the closest replica. Because the timestamp negotiation requires up-front knowledge of which rows will be read, it can only be used with single-use read-only transactions. See TransactionOptions.ReadOnly.max_staleness and TransactionOptions.ReadOnly.min_read_timestamp. Old read timestamps and garbage collection: Cloud Spanner continuously garbage collects deleted and overwritten data in the background to reclaim storage space. This process is known as "version GC".
By default, version GC reclaims versions after they are one hour old. Because of this, Cloud Spanner cannot perform reads at read timestamps more than one hour in the past. This restriction also applies to in-progress reads and/or SQL queries whose timestamps become too old while executing. Reads and SQL queries with too-old read timestamps fail with the error `FAILED_PRECONDITION`. You can configure and extend the `VERSION_RETENTION_PERIOD` of a database up to a period as long as one week, which allows Cloud Spanner to perform reads up to one week in the past. Querying change streams: A change stream is a schema object that can be configured to watch data changes on the entire database, a set of tables, or a set of columns in a database. When a change stream is created, Spanner automatically defines a corresponding SQL Table-Valued Function (TVF) that can be used to query the change records in the associated change stream using the ExecuteStreamingSql API. The name of the TVF for a change stream is generated from the name of the change stream: READ_. All queries on change stream TVFs must be executed using the ExecuteStreamingSql API with a single-use read-only transaction with a strong read-only timestamp_bound. The change stream TVF allows users to specify the start_timestamp and end_timestamp for the time range of interest. All change records within the retention period are accessible using the strong read-only timestamp_bound. All other TransactionOptions are invalid for change stream queries. In addition, if TransactionOptions.read_only.return_read_timestamp is set to true, a special value of 2^63 - 2 will be returned in the Transaction message that describes the transaction, instead of a valid read timestamp. This special value should be discarded and not used for any subsequent queries. Please see https://cloud.google.com/spanner/docs/change-streams for more details on how to query the change stream TVFs. Partitioned DML transactions: Partitioned DML transactions are used to execute DML statements with a different execution strategy that provides different, and often better, scalability properties for large, table-wide operations than DML in a ReadWrite transaction. Smaller scoped statements, such as an OLTP workload, should prefer using ReadWrite transactions. Partitioned DML partitions the keyspace and runs the DML statement on each partition in separate, internal transactions. These transactions commit automatically when complete, and run independently from one another. To reduce lock contention, this execution strategy only acquires read locks on rows that match the WHERE clause of the statement. Additionally, the smaller per-partition transactions hold locks for less time. That said, Partitioned DML is not a drop-in replacement for standard DML used in ReadWrite transactions. - The DML statement must be fully-partitionable. Specifically, the statement must be expressible as the union of many statements which each access only a single row of the table. - The statement is not applied atomically to all rows of the table. Rather, the statement is applied atomically to partitions of the table, in independent transactions. Secondary index rows are updated atomically with the base table rows. - Partitioned DML does not guarantee exactly-once execution semantics against a partition. The statement is applied at least once to each partition. It is strongly recommended that the DML statement be idempotent to avoid unexpected results.
For instance, it is potentially dangerous to run a statement such as `UPDATE table SET column = column + 1` as it could be run multiple times against some rows. - The partitions are committed automatically - there is no support for Commit or Rollback. If the call returns an error, or if the client issuing the ExecuteSql call dies, it is possible that some rows had the statement executed on them successfully. It is also possible that the statement was never executed against other rows. - Partitioned DML transactions may only contain the execution of a single DML statement via ExecuteSql or ExecuteStreamingSql. - If any error is encountered during the execution of the partitioned DML operation (for instance, a UNIQUE INDEX violation, division by zero, or a value that cannot be stored due to schema constraints), then the operation is stopped at that point and an error is returned. It is possible that at this point, some partitions have been committed (or even committed multiple times), and other partitions have not been run at all. Given the above, Partitioned DML is a good fit for large, database-wide operations that are idempotent, such as deleting old rows from a very large table. # Execute the read or SQL query in a temporary transaction. This is the most efficient way to execute a transaction that consists of a single SQL query. "excludeTxnFromChangeStreams": True or False, # When `exclude_txn_from_change_streams` is set to `true`: * Modifications from this transaction will not be recorded in change streams with DDL option `allow_txn_exclusion=true` that are tracking columns modified by these transactions. * Modifications from this transaction will be recorded in change streams with DDL option `allow_txn_exclusion=false or not set` that are tracking columns modified by these transactions. When `exclude_txn_from_change_streams` is set to `false` or not set, modifications from this transaction will be recorded in all change streams that are tracking columns modified by these transactions. `exclude_txn_from_change_streams` may only be specified for read-write or partitioned-dml transactions, otherwise the API will return an `INVALID_ARGUMENT` error. "partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction. Authorization to begin a Partitioned DML transaction requires `spanner.databases.beginPartitionedDmlTransaction` permission on the `session` resource. }, @@ -1289,7 +1289,7 @@
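Tying the `singleUse` selector above to the change stream rules described earlier, here is a rough sketch of querying a change stream TVF with ExecuteStreamingSql under a single-use strong read-only bound. The stream name `my_stream` (hence the TVF name `READ_my_stream`), the session path, and the start timestamp are hypothetical, and a production client would consume the PartialResultSet stream incrementally rather than through a single `execute()`:

```python
from googleapiclient.discovery import build

spanner = build("spanner", "v1")
sessions = spanner.projects().instances().databases().sessions()

response = sessions.executeStreamingSql(
    session="projects/p/instances/i/databases/d/sessions/s",  # placeholder
    body={
        # Change stream queries must use a single-use strong read-only bound.
        "transaction": {"singleUse": {"readOnly": {"strong": True}}},
        "sql": (
            "SELECT ChangeRecord FROM READ_my_stream("
            "start_timestamp => @start_ts, end_timestamp => NULL, "
            "partition_token => NULL, heartbeat_milliseconds => 10000)"
        ),
        "params": {"start_ts": "2024-06-01T00:00:00Z"},
        "paramTypes": {"start_ts": {"code": "TIMESTAMP"}},
    },
).execute()
```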

Method Details

}, "table": "A String", # Required. The name of the table in the database to be read. "transaction": { # This message is used to select the transaction in which a Read or ExecuteSql call runs. See TransactionOptions for more information about transactions. # Read only snapshot transactions are supported, read/write and single use transactions are not. - "begin": { # Transactions: Each session can have at most one active transaction at a time (note that standalone reads and queries use a transaction internally and do count towards the one transaction limit). After the active transaction is completed, the session can immediately be re-used for the next transaction. It is not necessary to create a new session for each transaction. Transaction modes: Cloud Spanner supports three transaction modes: 1. Locking read-write. This type of transaction is the only way to write data into Cloud Spanner. These transactions rely on pessimistic locking and, if necessary, two-phase commit. Locking read-write transactions may abort, requiring the application to retry. 2. Snapshot read-only. Snapshot read-only transactions provide guaranteed consistency across several reads, but do not allow writes. Snapshot read-only transactions can be configured to read at timestamps in the past, or configured to perform a strong read (where Spanner will select a timestamp such that the read is guaranteed to see the effects of all transactions that have committed before the start of the read). Snapshot read-only transactions do not need to be committed. Queries on change streams must be performed with the snapshot read-only transaction mode, specifying a strong read. Please see TransactionOptions.ReadOnly.strong for more details. 3. Partitioned DML. This type of transaction is used to execute a single Partitioned DML statement. Partitioned DML partitions the key space and runs the DML statement over each partition in parallel using separate, internal transactions that commit independently. Partitioned DML transactions do not need to be committed. For transactions that only read, snapshot read-only transactions provide simpler semantics and are almost always faster. In particular, read-only transactions do not take locks, so they do not conflict with read-write transactions. As a consequence of not taking locks, they also do not abort, so retry loops are not needed. Transactions may only read-write data in a single database. They may, however, read-write data in different tables within that database. Locking read-write transactions: Locking transactions may be used to atomically read-modify-write data anywhere in a database. This type of transaction is externally consistent. Clients should attempt to minimize the amount of time a transaction is active. Faster transactions commit with higher probability and cause less contention. Cloud Spanner attempts to keep read locks active as long as the transaction continues to do reads, and the transaction has not been terminated by Commit or Rollback. Long periods of inactivity at the client may cause Cloud Spanner to release a transaction's locks and abort it. Conceptually, a read-write transaction consists of zero or more reads or SQL statements followed by Commit. At any time before Commit, the client can send a Rollback request to abort the transaction. Semantics: Cloud Spanner can commit the transaction if all read locks it acquired are still valid at commit time, and it is able to acquire write locks for all writes. Cloud Spanner can abort the transaction for any reason. 
If a commit attempt returns `ABORTED`, Cloud Spanner guarantees that the transaction has not modified any user data in Cloud Spanner. Unless the transaction commits, Cloud Spanner makes no guarantees about how long the transaction's locks were held for. It is an error to use Cloud Spanner locks for any sort of mutual exclusion other than between Cloud Spanner transactions themselves. Retrying aborted transactions: When a transaction aborts, the application can choose to retry the whole transaction again. To maximize the chances of successfully committing the retry, the client should execute the retry in the same session as the original attempt. The original session's lock priority increases with each consecutive abort, meaning that each attempt has a slightly better chance of success than the previous. Note that the lock priority is preserved per session (not per transaction). Lock priority is set by the first read or write in the first attempt of a read-write transaction. If the application starts a new session to retry the whole transaction, the transaction loses its original lock priority. Moreover, the lock priority is only preserved if the transaction fails with an `ABORTED` error. Under some circumstances (for example, many transactions attempting to modify the same row(s)), a transaction can abort many times in a short period before successfully committing. Thus, it is not a good idea to cap the number of retries a transaction can attempt; instead, it is better to limit the total amount of time spent retrying. Idle transactions: A transaction is considered idle if it has no outstanding reads or SQL queries and has not started a read or SQL query within the last 10 seconds. Idle transactions can be aborted by Cloud Spanner so that they don't hold on to locks indefinitely. If an idle transaction is aborted, the commit will fail with the error `ABORTED`. If this behavior is undesirable, periodically executing a simple SQL query in the transaction (for example, `SELECT 1`) prevents the transaction from becoming idle. Snapshot read-only transactions: Snapshot read-only transactions provide a simpler method than locking read-write transactions for doing several consistent reads. However, this type of transaction does not support writes. Snapshot transactions do not take locks. Instead, they work by choosing a Cloud Spanner timestamp, then executing all reads at that timestamp. Since they do not acquire locks, they do not block concurrent read-write transactions. Unlike locking read-write transactions, snapshot read-only transactions never abort. They can fail if the chosen read timestamp is garbage collected; however, the default garbage collection policy is generous enough that most applications do not need to worry about this in practice. Snapshot read-only transactions do not need to call Commit or Rollback (and in fact are not permitted to do so). To execute a snapshot transaction, the client specifies a timestamp bound, which tells Cloud Spanner how to choose a read timestamp. The types of timestamp bound are: - Strong (the default). - Bounded staleness. - Exact staleness. If the Cloud Spanner database to be read is geographically distributed, stale read-only transactions can execute more quickly than strong or read-write transactions, because they are able to execute far from the leader replica. Each type of timestamp bound is discussed in detail below. Strong: Strong reads are guaranteed to see the effects of all transactions that have committed before the start of the read.
Furthermore, all rows yielded by a single read are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. Strong reads are not repeatable: two consecutive strong read-only transactions might return inconsistent results if there are concurrent writes. If consistency across reads is required, the reads should be executed within a transaction or at an exact read timestamp. Queries on change streams (see below for more details) must also specify the strong read timestamp bound. See TransactionOptions.ReadOnly.strong. Exact staleness: These timestamp bounds execute reads at a user-specified timestamp. Reads at a timestamp are guaranteed to see a consistent prefix of the global transaction history: they observe modifications done by all transactions with a commit timestamp less than or equal to the read timestamp, and observe none of the modifications done by transactions with a larger commit timestamp. They will block until all conflicting transactions that may be assigned commit timestamps <= the read timestamp have finished. The timestamp can either be expressed as an absolute Cloud Spanner commit timestamp or a staleness relative to the current time. These modes do not require a "negotiation phase" to pick a timestamp. As a result, they execute slightly faster than the equivalent boundedly stale concurrency modes. On the other hand, boundedly stale reads usually return fresher results. See TransactionOptions.ReadOnly.read_timestamp and TransactionOptions.ReadOnly.exact_staleness. Bounded staleness: Bounded staleness modes allow Cloud Spanner to pick the read timestamp, subject to a user-provided staleness bound. Cloud Spanner chooses the newest timestamp within the staleness bound that allows execution of the reads at the closest available replica without blocking. All rows yielded are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. Boundedly stale reads are not repeatable: two stale reads, even if they use the same staleness bound, can execute at different timestamps and thus return inconsistent results. Boundedly stale reads execute in two phases: the first phase negotiates a timestamp among all replicas needed to serve the read. In the second phase, reads are executed at the negotiated timestamp. As a result of the two-phase execution, bounded staleness reads are usually a little slower than comparable exact staleness reads. However, they are typically able to return fresher results, and are more likely to execute at the closest replica. Because the timestamp negotiation requires up-front knowledge of which rows will be read, it can only be used with single-use read-only transactions. See TransactionOptions.ReadOnly.max_staleness and TransactionOptions.ReadOnly.min_read_timestamp. Old read timestamps and garbage collection: Cloud Spanner continuously garbage collects deleted and overwritten data in the background to reclaim storage space. This process is known as "version GC". By default, version GC reclaims versions after they are one hour old. Because of this, Cloud Spanner cannot perform reads at read timestamps more than one hour in the past. This restriction also applies to in-progress reads and/or SQL queries whose timestamps become too old while executing. Reads and SQL queries with too-old read timestamps fail with the error `FAILED_PRECONDITION`.
You can configure and extend the `VERSION_RETENTION_PERIOD` of a database up to a period as long as one week, which allows Cloud Spanner to perform reads up to one week in the past. Querying change Streams: A Change Stream is a schema object that can be configured to watch data changes on the entire database, a set of tables, or a set of columns in a database. When a change stream is created, Spanner automatically defines a corresponding SQL Table-Valued Function (TVF) that can be used to query the change records in the associated change stream using the ExecuteStreamingSql API. The name of the TVF for a change stream is generated from the name of the change stream: READ_. All queries on change stream TVFs must be executed using the ExecuteStreamingSql API with a single-use read-only transaction with a strong read-only timestamp_bound. The change stream TVF allows users to specify the start_timestamp and end_timestamp for the time range of interest. All change records within the retention period is accessible using the strong read-only timestamp_bound. All other TransactionOptions are invalid for change stream queries. In addition, if TransactionOptions.read_only.return_read_timestamp is set to true, a special value of 2^63 - 2 will be returned in the Transaction message that describes the transaction, instead of a valid read timestamp. This special value should be discarded and not used for any subsequent queries. Please see https://cloud.google.com/spanner/docs/change-streams for more details on how to query the change stream TVFs. Partitioned DML transactions: Partitioned DML transactions are used to execute DML statements with a different execution strategy that provides different, and often better, scalability properties for large, table-wide operations than DML in a ReadWrite transaction. Smaller scoped statements, such as an OLTP workload, should prefer using ReadWrite transactions. Partitioned DML partitions the keyspace and runs the DML statement on each partition in separate, internal transactions. These transactions commit automatically when complete, and run independently from one another. To reduce lock contention, this execution strategy only acquires read locks on rows that match the WHERE clause of the statement. Additionally, the smaller per-partition transactions hold locks for less time. That said, Partitioned DML is not a drop-in replacement for standard DML used in ReadWrite transactions. - The DML statement must be fully-partitionable. Specifically, the statement must be expressible as the union of many statements which each access only a single row of the table. - The statement is not applied atomically to all rows of the table. Rather, the statement is applied atomically to partitions of the table, in independent transactions. Secondary index rows are updated atomically with the base table rows. - Partitioned DML does not guarantee exactly-once execution semantics against a partition. The statement is applied at least once to each partition. It is strongly recommended that the DML statement should be idempotent to avoid unexpected results. For instance, it is potentially dangerous to run a statement such as `UPDATE table SET column = column + 1` as it could be run multiple times against some rows. - The partitions are committed automatically - there is no support for Commit or Rollback. If the call returns an error, or if the client issuing the ExecuteSql call dies, it is possible that some rows had the statement executed on them successfully. 
It is also possible that statement was never executed against other rows. - Partitioned DML transactions may only contain the execution of a single DML statement via ExecuteSql or ExecuteStreamingSql. - If any error is encountered during the execution of the partitioned DML operation (for instance, a UNIQUE INDEX violation, division by zero, or a value that cannot be stored due to schema constraints), then the operation is stopped at that point and an error is returned. It is possible that at this point, some partitions have been committed (or even committed multiple times), and other partitions have not been run at all. Given the above, Partitioned DML is good fit for large, database-wide, operations that are idempotent, such as deleting old rows from a very large table. # Begin a new transaction and execute this read or SQL query in it. The transaction ID of the new transaction is returned in ResultSetMetadata.transaction, which is a Transaction. + "begin": { # Transactions: Each session can have at most one active transaction at a time (note that standalone reads and queries use a transaction internally and do count towards the one transaction limit). After the active transaction is completed, the session can immediately be re-used for the next transaction. It is not necessary to create a new session for each transaction. Transaction modes: Cloud Spanner supports three transaction modes: 1. Locking read-write. This type of transaction is the only way to write data into Cloud Spanner. These transactions rely on pessimistic locking and, if necessary, two-phase commit. Locking read-write transactions may abort, requiring the application to retry. 2. Snapshot read-only. Snapshot read-only transactions provide guaranteed consistency across several reads, but do not allow writes. Snapshot read-only transactions can be configured to read at timestamps in the past, or configured to perform a strong read (where Spanner will select a timestamp such that the read is guaranteed to see the effects of all transactions that have committed before the start of the read). Snapshot read-only transactions do not need to be committed. Queries on change streams must be performed with the snapshot read-only transaction mode, specifying a strong read. See TransactionOptions.ReadOnly.strong for more details. 3. Partitioned DML. This type of transaction is used to execute a single Partitioned DML statement. Partitioned DML partitions the key space and runs the DML statement over each partition in parallel using separate, internal transactions that commit independently. Partitioned DML transactions do not need to be committed. For transactions that only read, snapshot read-only transactions provide simpler semantics and are almost always faster. In particular, read-only transactions do not take locks, so they do not conflict with read-write transactions. As a consequence of not taking locks, they also do not abort, so retry loops are not needed. Transactions may only read-write data in a single database. They may, however, read-write data in different tables within that database. Locking read-write transactions: Locking transactions may be used to atomically read-modify-write data anywhere in a database. This type of transaction is externally consistent. Clients should attempt to minimize the amount of time a transaction is active. Faster transactions commit with higher probability and cause less contention. 
Cloud Spanner attempts to keep read locks active as long as the transaction continues to do reads, and the transaction has not been terminated by Commit or Rollback. Long periods of inactivity at the client may cause Cloud Spanner to release a transaction's locks and abort it. Conceptually, a read-write transaction consists of zero or more reads or SQL statements followed by Commit. At any time before Commit, the client can send a Rollback request to abort the transaction. Semantics: Cloud Spanner can commit the transaction if all read locks it acquired are still valid at commit time, and it is able to acquire write locks for all writes. Cloud Spanner can abort the transaction for any reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees that the transaction has not modified any user data in Cloud Spanner. Unless the transaction commits, Cloud Spanner makes no guarantees about how long the transaction's locks were held for. It is an error to use Cloud Spanner locks for any sort of mutual exclusion other than between Cloud Spanner transactions themselves. Retrying aborted transactions: When a transaction aborts, the application can choose to retry the whole transaction again. To maximize the chances of successfully committing the retry, the client should execute the retry in the same session as the original attempt. The original session's lock priority increases with each consecutive abort, meaning that each attempt has a slightly better chance of success than the previous. Note that the lock priority is preserved per session (not per transaction). Lock priority is set by the first read or write in the first attempt of a read-write transaction. If the application starts a new session to retry the whole transaction, the transaction loses its original lock priority. Moreover, the lock priority is only preserved if the transaction fails with an `ABORTED` error. Under some circumstances (for example, many transactions attempting to modify the same row(s)), a transaction can abort many times in a short period before successfully committing. Thus, it is not a good idea to cap the number of retries a transaction can attempt; instead, it is better to limit the total amount of time spent retrying. Idle transactions: A transaction is considered idle if it has no outstanding reads or SQL queries and has not started a read or SQL query within the last 10 seconds. Idle transactions can be aborted by Cloud Spanner so that they don't hold on to locks indefinitely. If an idle transaction is aborted, the commit will fail with error `ABORTED`. If this behavior is undesirable, periodically executing a simple SQL query in the transaction (for example, `SELECT 1`) prevents the transaction from becoming idle. Snapshot read-only transactions: Snapshot read-only transactions provides a simpler method than locking read-write transactions for doing several consistent reads. However, this type of transaction does not support writes. Snapshot transactions do not take locks. Instead, they work by choosing a Cloud Spanner timestamp, then executing all reads at that timestamp. Since they do not acquire locks, they do not block concurrent read-write transactions. Unlike locking read-write transactions, snapshot read-only transactions never abort. They can fail if the chosen read timestamp is garbage collected; however, the default garbage collection policy is generous enough that most applications do not need to worry about this in practice. 
Snapshot read-only transactions do not need to call Commit or Rollback (and in fact are not permitted to do so). To execute a snapshot transaction, the client specifies a timestamp bound, which tells Cloud Spanner how to choose a read timestamp. The types of timestamp bound are: - Strong (the default). - Bounded staleness. - Exact staleness. If the Cloud Spanner database to be read is geographically distributed, stale read-only transactions can execute more quickly than strong or read-write transactions, because they are able to execute far from the leader replica. Each type of timestamp bound is discussed in detail below. Strong: Strong reads are guaranteed to see the effects of all transactions that have committed before the start of the read. Furthermore, all rows yielded by a single read are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. Strong reads are not repeatable: two consecutive strong read-only transactions might return inconsistent results if there are concurrent writes. If consistency across reads is required, the reads should be executed within a transaction or at an exact read timestamp. Queries on change streams (see below for more details) must also specify the strong read timestamp bound. See TransactionOptions.ReadOnly.strong. Exact staleness: These timestamp bounds execute reads at a user-specified timestamp. Reads at a timestamp are guaranteed to see a consistent prefix of the global transaction history: they observe modifications done by all transactions with a commit timestamp less than or equal to the read timestamp, and observe none of the modifications done by transactions with a larger commit timestamp. They will block until all conflicting transactions that may be assigned commit timestamps <= the read timestamp have finished. The timestamp can either be expressed as an absolute Cloud Spanner commit timestamp or a staleness relative to the current time. These modes do not require a "negotiation phase" to pick a timestamp. As a result, they execute slightly faster than the equivalent boundedly stale concurrency modes. On the other hand, boundedly stale reads usually return fresher results. See TransactionOptions.ReadOnly.read_timestamp and TransactionOptions.ReadOnly.exact_staleness. Bounded staleness: Bounded staleness modes allow Cloud Spanner to pick the read timestamp, subject to a user-provided staleness bound. Cloud Spanner chooses the newest timestamp within the staleness bound that allows execution of the reads at the closest available replica without blocking. All rows yielded are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. Boundedly stale reads are not repeatable: two stale reads, even if they use the same staleness bound, can execute at different timestamps and thus return inconsistent results. Boundedly stale reads execute in two phases: the first phase negotiates a timestamp among all replicas needed to serve the read. In the second phase, reads are executed at the negotiated timestamp. As a result of the two phase execution, bounded staleness reads are usually a little slower than comparable exact staleness reads. However, they are typically able to return fresher results, and are more likely to execute at the closest replica. Because the timestamp negotiation requires up-front knowledge of which rows will be read, it can only be used with single-use read-only transactions. 
See TransactionOptions.ReadOnly.max_staleness and TransactionOptions.ReadOnly.min_read_timestamp. Old read timestamps and garbage collection: Cloud Spanner continuously garbage collects deleted and overwritten data in the background to reclaim storage space. This process is known as "version GC". By default, version GC reclaims versions after they are one hour old. Because of this, Cloud Spanner cannot perform reads at read timestamps more than one hour in the past. This restriction also applies to in-progress reads and/or SQL queries whose timestamp become too old while executing. Reads and SQL queries with too-old read timestamps fail with the error `FAILED_PRECONDITION`. You can configure and extend the `VERSION_RETENTION_PERIOD` of a database up to a period as long as one week, which allows Cloud Spanner to perform reads up to one week in the past. Querying change Streams: A Change Stream is a schema object that can be configured to watch data changes on the entire database, a set of tables, or a set of columns in a database. When a change stream is created, Spanner automatically defines a corresponding SQL Table-Valued Function (TVF) that can be used to query the change records in the associated change stream using the ExecuteStreamingSql API. The name of the TVF for a change stream is generated from the name of the change stream: READ_. All queries on change stream TVFs must be executed using the ExecuteStreamingSql API with a single-use read-only transaction with a strong read-only timestamp_bound. The change stream TVF allows users to specify the start_timestamp and end_timestamp for the time range of interest. All change records within the retention period is accessible using the strong read-only timestamp_bound. All other TransactionOptions are invalid for change stream queries. In addition, if TransactionOptions.read_only.return_read_timestamp is set to true, a special value of 2^63 - 2 will be returned in the Transaction message that describes the transaction, instead of a valid read timestamp. This special value should be discarded and not used for any subsequent queries. Please see https://cloud.google.com/spanner/docs/change-streams for more details on how to query the change stream TVFs. Partitioned DML transactions: Partitioned DML transactions are used to execute DML statements with a different execution strategy that provides different, and often better, scalability properties for large, table-wide operations than DML in a ReadWrite transaction. Smaller scoped statements, such as an OLTP workload, should prefer using ReadWrite transactions. Partitioned DML partitions the keyspace and runs the DML statement on each partition in separate, internal transactions. These transactions commit automatically when complete, and run independently from one another. To reduce lock contention, this execution strategy only acquires read locks on rows that match the WHERE clause of the statement. Additionally, the smaller per-partition transactions hold locks for less time. That said, Partitioned DML is not a drop-in replacement for standard DML used in ReadWrite transactions. - The DML statement must be fully-partitionable. Specifically, the statement must be expressible as the union of many statements which each access only a single row of the table. - The statement is not applied atomically to all rows of the table. Rather, the statement is applied atomically to partitions of the table, in independent transactions. Secondary index rows are updated atomically with the base table rows. 
- Partitioned DML does not guarantee exactly-once execution semantics against a partition. The statement is applied at least once to each partition. It is strongly recommended that the DML statement should be idempotent to avoid unexpected results. For instance, it is potentially dangerous to run a statement such as `UPDATE table SET column = column + 1` as it could be run multiple times against some rows. - The partitions are committed automatically - there is no support for Commit or Rollback. If the call returns an error, or if the client issuing the ExecuteSql call dies, it is possible that some rows had the statement executed on them successfully. It is also possible that statement was never executed against other rows. - Partitioned DML transactions may only contain the execution of a single DML statement via ExecuteSql or ExecuteStreamingSql. - If any error is encountered during the execution of the partitioned DML operation (for instance, a UNIQUE INDEX violation, division by zero, or a value that cannot be stored due to schema constraints), then the operation is stopped at that point and an error is returned. It is possible that at this point, some partitions have been committed (or even committed multiple times), and other partitions have not been run at all. Given the above, Partitioned DML is good fit for large, database-wide, operations that are idempotent, such as deleting old rows from a very large table. # Begin a new transaction and execute this read or SQL query in it. The transaction ID of the new transaction is returned in ResultSetMetadata.transaction, which is a Transaction. "excludeTxnFromChangeStreams": True or False, # When `exclude_txn_from_change_streams` is set to `true`: * Modifications from this transaction will not be recorded in change streams with DDL option `allow_txn_exclusion=true` that are tracking columns modified by these transactions. * Modifications from this transaction will be recorded in change streams with DDL option `allow_txn_exclusion=false or not set` that are tracking columns modified by these transactions. When `exclude_txn_from_change_streams` is set to `false` or not set, Modifications from this transaction will be recorded in all change streams that are tracking columns modified by these transactions. `exclude_txn_from_change_streams` may only be specified for read-write or partitioned-dml transactions, otherwise the API will return an `INVALID_ARGUMENT` error. "partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction. Authorization to begin a Partitioned DML transaction requires `spanner.databases.beginPartitionedDmlTransaction` permission on the `session` resource. }, @@ -1306,7 +1306,7 @@
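The retry guidance above (cap retries by elapsed time, reuse the same session so its lock priority keeps rising) is easier to follow with a concrete call sequence. The sketch below is an illustration, not part of the generated reference: it begins a read-write transaction via this library's `sessions().executeSql()` method and commits it, retrying on `ABORTED`. The project, instance, database, and session names are placeholders, and the mapping of gRPC `ABORTED` onto HTTP 409 over REST is our assumption.

```python
import time

from googleapiclient import discovery
from googleapiclient.errors import HttpError

# Placeholder session resource name; create one first with sessions().create().
SESSION = "projects/my-project/instances/my-instance/databases/my-db/sessions/my-session"

service = discovery.build("spanner", "v1")
sessions = service.projects().instances().databases().sessions()


def run_in_read_write_txn(sql, deadline_seconds=30.0):
    """Execute `sql` in a read-write transaction, retrying on ABORTED.

    Per the docs above, retries are bounded by total elapsed time rather than
    attempt count, and every attempt reuses the same session so the session's
    lock priority keeps increasing across consecutive aborts.
    """
    start = time.monotonic()
    while True:
        # "begin" starts a new read-write transaction; its ID is returned in
        # ResultSetMetadata.transaction. "seqno" is required for DML and
        # ignored for queries.
        result = sessions.executeSql(
            session=SESSION,
            body={
                "sql": sql,
                "transaction": {"begin": {"readWrite": {}}},
                "seqno": 1,
            },
        ).execute()
        txn_id = result["metadata"]["transaction"]["id"]
        try:
            return sessions.commit(
                session=SESSION,
                body={"transactionId": txn_id, "mutations": []},
            ).execute()
        except HttpError as err:
            # Assumed mapping: gRPC ABORTED surfaces as HTTP 409 over REST.
            if err.resp.status != 409 or time.monotonic() - start > deadline_seconds:
                raise
```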

Method Details

}, }, "id": "A String", # Execute the read or SQL query in a previously-started transaction. - "singleUse": { # Transactions: Each session can have at most one active transaction at a time (note that standalone reads and queries use a transaction internally and do count towards the one transaction limit). After the active transaction is completed, the session can immediately be re-used for the next transaction. It is not necessary to create a new session for each transaction. Transaction modes: Cloud Spanner supports three transaction modes: 1. Locking read-write. This type of transaction is the only way to write data into Cloud Spanner. These transactions rely on pessimistic locking and, if necessary, two-phase commit. Locking read-write transactions may abort, requiring the application to retry. 2. Snapshot read-only. Snapshot read-only transactions provide guaranteed consistency across several reads, but do not allow writes. Snapshot read-only transactions can be configured to read at timestamps in the past, or configured to perform a strong read (where Spanner will select a timestamp such that the read is guaranteed to see the effects of all transactions that have committed before the start of the read). Snapshot read-only transactions do not need to be committed. Queries on change streams must be performed with the snapshot read-only transaction mode, specifying a strong read. Please see TransactionOptions.ReadOnly.strong for more details. 3. Partitioned DML. This type of transaction is used to execute a single Partitioned DML statement. Partitioned DML partitions the key space and runs the DML statement over each partition in parallel using separate, internal transactions that commit independently. Partitioned DML transactions do not need to be committed. For transactions that only read, snapshot read-only transactions provide simpler semantics and are almost always faster. In particular, read-only transactions do not take locks, so they do not conflict with read-write transactions. As a consequence of not taking locks, they also do not abort, so retry loops are not needed. Transactions may only read and write data in a single database. They may, however, read and write data in different tables within that database. Locking read-write transactions: Locking transactions may be used to atomically read-modify-write data anywhere in a database. This type of transaction is externally consistent. Clients should attempt to minimize the amount of time a transaction is active. Faster transactions commit with higher probability and cause less contention. Cloud Spanner attempts to keep read locks active as long as the transaction continues to do reads, and the transaction has not been terminated by Commit or Rollback. Long periods of inactivity at the client may cause Cloud Spanner to release a transaction's locks and abort it. Conceptually, a read-write transaction consists of zero or more reads or SQL statements followed by Commit. At any time before Commit, the client can send a Rollback request to abort the transaction. Semantics: Cloud Spanner can commit the transaction if all read locks it acquired are still valid at commit time, and it is able to acquire write locks for all writes. Cloud Spanner can abort the transaction for any reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees that the transaction has not modified any user data in Cloud Spanner. Unless the transaction commits, Cloud Spanner makes no guarantees about how long the transaction's locks were held for. 
It is an error to use Cloud Spanner locks for any sort of mutual exclusion other than between Cloud Spanner transactions themselves. Retrying aborted transactions: When a transaction aborts, the application can choose to retry the whole transaction again. To maximize the chances of successfully committing the retry, the client should execute the retry in the same session as the original attempt. The original session's lock priority increases with each consecutive abort, meaning that each attempt has a slightly better chance of success than the previous. Note that the lock priority is preserved per session (not per transaction). Lock priority is set by the first read or write in the first attempt of a read-write transaction. If the application starts a new session to retry the whole transaction, the transaction loses its original lock priority. Moreover, the lock priority is only preserved if the transaction fails with an `ABORTED` error. Under some circumstances (for example, many transactions attempting to modify the same row(s)), a transaction can abort many times in a short period before successfully committing. Thus, it is not a good idea to cap the number of retries a transaction can attempt; instead, it is better to limit the total amount of time spent retrying. Idle transactions: A transaction is considered idle if it has no outstanding reads or SQL queries and has not started a read or SQL query within the last 10 seconds. Idle transactions can be aborted by Cloud Spanner so that they don't hold on to locks indefinitely. If an idle transaction is aborted, the commit will fail with error `ABORTED`. If this behavior is undesirable, periodically executing a simple SQL query in the transaction (for example, `SELECT 1`) prevents the transaction from becoming idle. Snapshot read-only transactions: Snapshot read-only transactions provide a simpler method than locking read-write transactions for doing several consistent reads. However, this type of transaction does not support writes. Snapshot transactions do not take locks. Instead, they work by choosing a Cloud Spanner timestamp, then executing all reads at that timestamp. Since they do not acquire locks, they do not block concurrent read-write transactions. Unlike locking read-write transactions, snapshot read-only transactions never abort. They can fail if the chosen read timestamp is garbage collected; however, the default garbage collection policy is generous enough that most applications do not need to worry about this in practice. Snapshot read-only transactions do not need to call Commit or Rollback (and in fact are not permitted to do so). To execute a snapshot transaction, the client specifies a timestamp bound, which tells Cloud Spanner how to choose a read timestamp. The types of timestamp bound are: - Strong (the default). - Bounded staleness. - Exact staleness. If the Cloud Spanner database to be read is geographically distributed, stale read-only transactions can execute more quickly than strong or read-write transactions, because they are able to execute far from the leader replica. Each type of timestamp bound is discussed in detail below. Strong: Strong reads are guaranteed to see the effects of all transactions that have committed before the start of the read. Furthermore, all rows yielded by a single read are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. 
Strong reads are not repeatable: two consecutive strong read-only transactions might return inconsistent results if there are concurrent writes. If consistency across reads is required, the reads should be executed within a transaction or at an exact read timestamp. Queries on change streams (see below for more details) must also specify the strong read timestamp bound. See TransactionOptions.ReadOnly.strong. Exact staleness: These timestamp bounds execute reads at a user-specified timestamp. Reads at a timestamp are guaranteed to see a consistent prefix of the global transaction history: they observe modifications done by all transactions with a commit timestamp less than or equal to the read timestamp, and observe none of the modifications done by transactions with a larger commit timestamp. They will block until all conflicting transactions that may be assigned commit timestamps <= the read timestamp have finished. The timestamp can either be expressed as an absolute Cloud Spanner commit timestamp or a staleness relative to the current time. These modes do not require a "negotiation phase" to pick a timestamp. As a result, they execute slightly faster than the equivalent boundedly stale concurrency modes. On the other hand, boundedly stale reads usually return fresher results. See TransactionOptions.ReadOnly.read_timestamp and TransactionOptions.ReadOnly.exact_staleness. Bounded staleness: Bounded staleness modes allow Cloud Spanner to pick the read timestamp, subject to a user-provided staleness bound. Cloud Spanner chooses the newest timestamp within the staleness bound that allows execution of the reads at the closest available replica without blocking. All rows yielded are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. Boundedly stale reads are not repeatable: two stale reads, even if they use the same staleness bound, can execute at different timestamps and thus return inconsistent results. Boundedly stale reads execute in two phases: the first phase negotiates a timestamp among all replicas needed to serve the read. In the second phase, reads are executed at the negotiated timestamp. As a result of the two-phase execution, bounded staleness reads are usually a little slower than comparable exact staleness reads. However, they are typically able to return fresher results, and are more likely to execute at the closest replica. Because the timestamp negotiation requires up-front knowledge of which rows will be read, it can only be used with single-use read-only transactions. See TransactionOptions.ReadOnly.max_staleness and TransactionOptions.ReadOnly.min_read_timestamp. Old read timestamps and garbage collection: Cloud Spanner continuously garbage collects deleted and overwritten data in the background to reclaim storage space. This process is known as "version GC". By default, version GC reclaims versions after they are one hour old. Because of this, Cloud Spanner cannot perform reads at read timestamps more than one hour in the past. This restriction also applies to in-progress reads and/or SQL queries whose timestamps become too old while executing. Reads and SQL queries with too-old read timestamps fail with the error `FAILED_PRECONDITION`. You can configure and extend the `VERSION_RETENTION_PERIOD` of a database up to a period as long as one week, which allows Cloud Spanner to perform reads up to one week in the past. 
Querying change streams: A change stream is a schema object that can be configured to watch data changes on the entire database, a set of tables, or a set of columns in a database. When a change stream is created, Spanner automatically defines a corresponding SQL Table-Valued Function (TVF) that can be used to query the change records in the associated change stream using the ExecuteStreamingSql API. The name of the TVF for a change stream is generated from the name of the change stream: `READ_` followed by the change stream name. All queries on change stream TVFs must be executed using the ExecuteStreamingSql API with a single-use read-only transaction with a strong read-only timestamp_bound. The change stream TVF allows users to specify the start_timestamp and end_timestamp for the time range of interest. All change records within the retention period are accessible using the strong read-only timestamp_bound. All other TransactionOptions are invalid for change stream queries. In addition, if TransactionOptions.read_only.return_read_timestamp is set to true, a special value of 2^63 - 2 will be returned in the Transaction message that describes the transaction, instead of a valid read timestamp. This special value should be discarded and not used for any subsequent queries. Please see https://cloud.google.com/spanner/docs/change-streams for more details on how to query the change stream TVFs. Partitioned DML transactions: Partitioned DML transactions are used to execute DML statements with a different execution strategy that provides different, and often better, scalability properties for large, table-wide operations than DML in a ReadWrite transaction. Smaller-scoped statements, such as those in an OLTP workload, should use ReadWrite transactions. Partitioned DML partitions the keyspace and runs the DML statement on each partition in separate, internal transactions. These transactions commit automatically when complete, and run independently from one another. To reduce lock contention, this execution strategy only acquires read locks on rows that match the WHERE clause of the statement. Additionally, the smaller per-partition transactions hold locks for less time. That said, Partitioned DML is not a drop-in replacement for standard DML used in ReadWrite transactions. - The DML statement must be fully-partitionable. Specifically, the statement must be expressible as the union of many statements which each access only a single row of the table. - The statement is not applied atomically to all rows of the table. Rather, the statement is applied atomically to partitions of the table, in independent transactions. Secondary index rows are updated atomically with the base table rows. - Partitioned DML does not guarantee exactly-once execution semantics against a partition. The statement is applied at least once to each partition. It is strongly recommended that the DML statement be idempotent to avoid unexpected results. For instance, it is potentially dangerous to run a statement such as `UPDATE table SET column = column + 1` as it could be run multiple times against some rows. - The partitions are committed automatically - there is no support for Commit or Rollback. If the call returns an error, or if the client issuing the ExecuteSql call dies, it is possible that some rows had the statement executed on them successfully. It is also possible that the statement was never executed against other rows. - Partitioned DML transactions may only contain the execution of a single DML statement via ExecuteSql or ExecuteStreamingSql. 
- If any error is encountered during the execution of the partitioned DML operation (for instance, a UNIQUE INDEX violation, division by zero, or a value that cannot be stored due to schema constraints), then the operation is stopped at that point and an error is returned. It is possible that at this point, some partitions have been committed (or even committed multiple times), and other partitions have not been run at all. Given the above, Partitioned DML is a good fit for large, database-wide operations that are idempotent, such as deleting old rows from a very large table. # Execute the read or SQL query in a temporary transaction. This is the most efficient way to execute a transaction that consists of a single SQL query. + "singleUse": { # Transactions: Each session can have at most one active transaction at a time (note that standalone reads and queries use a transaction internally and do count towards the one transaction limit). After the active transaction is completed, the session can immediately be re-used for the next transaction. It is not necessary to create a new session for each transaction. Transaction modes: Cloud Spanner supports three transaction modes: 1. Locking read-write. This type of transaction is the only way to write data into Cloud Spanner. These transactions rely on pessimistic locking and, if necessary, two-phase commit. Locking read-write transactions may abort, requiring the application to retry. 2. Snapshot read-only. Snapshot read-only transactions provide guaranteed consistency across several reads, but do not allow writes. Snapshot read-only transactions can be configured to read at timestamps in the past, or configured to perform a strong read (where Spanner will select a timestamp such that the read is guaranteed to see the effects of all transactions that have committed before the start of the read). Snapshot read-only transactions do not need to be committed. Queries on change streams must be performed with the snapshot read-only transaction mode, specifying a strong read. See TransactionOptions.ReadOnly.strong for more details. 3. Partitioned DML. This type of transaction is used to execute a single Partitioned DML statement. Partitioned DML partitions the key space and runs the DML statement over each partition in parallel using separate, internal transactions that commit independently. Partitioned DML transactions do not need to be committed. For transactions that only read, snapshot read-only transactions provide simpler semantics and are almost always faster. In particular, read-only transactions do not take locks, so they do not conflict with read-write transactions. As a consequence of not taking locks, they also do not abort, so retry loops are not needed. Transactions may only read and write data in a single database. They may, however, read and write data in different tables within that database. Locking read-write transactions: Locking transactions may be used to atomically read-modify-write data anywhere in a database. This type of transaction is externally consistent. Clients should attempt to minimize the amount of time a transaction is active. Faster transactions commit with higher probability and cause less contention. Cloud Spanner attempts to keep read locks active as long as the transaction continues to do reads, and the transaction has not been terminated by Commit or Rollback. Long periods of inactivity at the client may cause Cloud Spanner to release a transaction's locks and abort it. 
Conceptually, a read-write transaction consists of zero or more reads or SQL statements followed by Commit. At any time before Commit, the client can send a Rollback request to abort the transaction. Semantics: Cloud Spanner can commit the transaction if all read locks it acquired are still valid at commit time, and it is able to acquire write locks for all writes. Cloud Spanner can abort the transaction for any reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees that the transaction has not modified any user data in Cloud Spanner. Unless the transaction commits, Cloud Spanner makes no guarantees about how long the transaction's locks were held for. It is an error to use Cloud Spanner locks for any sort of mutual exclusion other than between Cloud Spanner transactions themselves. Retrying aborted transactions: When a transaction aborts, the application can choose to retry the whole transaction again. To maximize the chances of successfully committing the retry, the client should execute the retry in the same session as the original attempt. The original session's lock priority increases with each consecutive abort, meaning that each attempt has a slightly better chance of success than the previous. Note that the lock priority is preserved per session (not per transaction). Lock priority is set by the first read or write in the first attempt of a read-write transaction. If the application starts a new session to retry the whole transaction, the transaction loses its original lock priority. Moreover, the lock priority is only preserved if the transaction fails with an `ABORTED` error. Under some circumstances (for example, many transactions attempting to modify the same row(s)), a transaction can abort many times in a short period before successfully committing. Thus, it is not a good idea to cap the number of retries a transaction can attempt; instead, it is better to limit the total amount of time spent retrying. Idle transactions: A transaction is considered idle if it has no outstanding reads or SQL queries and has not started a read or SQL query within the last 10 seconds. Idle transactions can be aborted by Cloud Spanner so that they don't hold on to locks indefinitely. If an idle transaction is aborted, the commit will fail with error `ABORTED`. If this behavior is undesirable, periodically executing a simple SQL query in the transaction (for example, `SELECT 1`) prevents the transaction from becoming idle. Snapshot read-only transactions: Snapshot read-only transactions provide a simpler method than locking read-write transactions for doing several consistent reads. However, this type of transaction does not support writes. Snapshot transactions do not take locks. Instead, they work by choosing a Cloud Spanner timestamp, then executing all reads at that timestamp. Since they do not acquire locks, they do not block concurrent read-write transactions. Unlike locking read-write transactions, snapshot read-only transactions never abort. They can fail if the chosen read timestamp is garbage collected; however, the default garbage collection policy is generous enough that most applications do not need to worry about this in practice. Snapshot read-only transactions do not need to call Commit or Rollback (and in fact are not permitted to do so). To execute a snapshot transaction, the client specifies a timestamp bound, which tells Cloud Spanner how to choose a read timestamp. The types of timestamp bound are: - Strong (the default). - Bounded staleness. 
- Exact staleness. If the Cloud Spanner database to be read is geographically distributed, stale read-only transactions can execute more quickly than strong or read-write transactions, because they are able to execute far from the leader replica. Each type of timestamp bound is discussed in detail below. Strong: Strong reads are guaranteed to see the effects of all transactions that have committed before the start of the read. Furthermore, all rows yielded by a single read are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. Strong reads are not repeatable: two consecutive strong read-only transactions might return inconsistent results if there are concurrent writes. If consistency across reads is required, the reads should be executed within a transaction or at an exact read timestamp. Queries on change streams (see below for more details) must also specify the strong read timestamp bound. See TransactionOptions.ReadOnly.strong. Exact staleness: These timestamp bounds execute reads at a user-specified timestamp. Reads at a timestamp are guaranteed to see a consistent prefix of the global transaction history: they observe modifications done by all transactions with a commit timestamp less than or equal to the read timestamp, and observe none of the modifications done by transactions with a larger commit timestamp. They will block until all conflicting transactions that may be assigned commit timestamps <= the read timestamp have finished. The timestamp can either be expressed as an absolute Cloud Spanner commit timestamp or a staleness relative to the current time. These modes do not require a "negotiation phase" to pick a timestamp. As a result, they execute slightly faster than the equivalent boundedly stale concurrency modes. On the other hand, boundedly stale reads usually return fresher results. See TransactionOptions.ReadOnly.read_timestamp and TransactionOptions.ReadOnly.exact_staleness. Bounded staleness: Bounded staleness modes allow Cloud Spanner to pick the read timestamp, subject to a user-provided staleness bound. Cloud Spanner chooses the newest timestamp within the staleness bound that allows execution of the reads at the closest available replica without blocking. All rows yielded are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. Boundedly stale reads are not repeatable: two stale reads, even if they use the same staleness bound, can execute at different timestamps and thus return inconsistent results. Boundedly stale reads execute in two phases: the first phase negotiates a timestamp among all replicas needed to serve the read. In the second phase, reads are executed at the negotiated timestamp. As a result of the two-phase execution, bounded staleness reads are usually a little slower than comparable exact staleness reads. However, they are typically able to return fresher results, and are more likely to execute at the closest replica. Because the timestamp negotiation requires up-front knowledge of which rows will be read, it can only be used with single-use read-only transactions. See TransactionOptions.ReadOnly.max_staleness and TransactionOptions.ReadOnly.min_read_timestamp. Old read timestamps and garbage collection: Cloud Spanner continuously garbage collects deleted and overwritten data in the background to reclaim storage space. This process is known as "version GC". 
By default, version GC reclaims versions after they are one hour old. Because of this, Cloud Spanner cannot perform reads at read timestamps more than one hour in the past. This restriction also applies to in-progress reads and/or SQL queries whose timestamps become too old while executing. Reads and SQL queries with too-old read timestamps fail with the error `FAILED_PRECONDITION`. You can configure and extend the `VERSION_RETENTION_PERIOD` of a database up to a period as long as one week, which allows Cloud Spanner to perform reads up to one week in the past. Querying change streams: A change stream is a schema object that can be configured to watch data changes on the entire database, a set of tables, or a set of columns in a database. When a change stream is created, Spanner automatically defines a corresponding SQL Table-Valued Function (TVF) that can be used to query the change records in the associated change stream using the ExecuteStreamingSql API. The name of the TVF for a change stream is generated from the name of the change stream: `READ_` followed by the change stream name. All queries on change stream TVFs must be executed using the ExecuteStreamingSql API with a single-use read-only transaction with a strong read-only timestamp_bound. The change stream TVF allows users to specify the start_timestamp and end_timestamp for the time range of interest. All change records within the retention period are accessible using the strong read-only timestamp_bound. All other TransactionOptions are invalid for change stream queries. In addition, if TransactionOptions.read_only.return_read_timestamp is set to true, a special value of 2^63 - 2 will be returned in the Transaction message that describes the transaction, instead of a valid read timestamp. This special value should be discarded and not used for any subsequent queries. Please see https://cloud.google.com/spanner/docs/change-streams for more details on how to query the change stream TVFs. Partitioned DML transactions: Partitioned DML transactions are used to execute DML statements with a different execution strategy that provides different, and often better, scalability properties for large, table-wide operations than DML in a ReadWrite transaction. Smaller-scoped statements, such as those in an OLTP workload, should use ReadWrite transactions. Partitioned DML partitions the keyspace and runs the DML statement on each partition in separate, internal transactions. These transactions commit automatically when complete, and run independently from one another. To reduce lock contention, this execution strategy only acquires read locks on rows that match the WHERE clause of the statement. Additionally, the smaller per-partition transactions hold locks for less time. That said, Partitioned DML is not a drop-in replacement for standard DML used in ReadWrite transactions. - The DML statement must be fully-partitionable. Specifically, the statement must be expressible as the union of many statements which each access only a single row of the table. - The statement is not applied atomically to all rows of the table. Rather, the statement is applied atomically to partitions of the table, in independent transactions. Secondary index rows are updated atomically with the base table rows. - Partitioned DML does not guarantee exactly-once execution semantics against a partition. The statement is applied at least once to each partition. It is strongly recommended that the DML statement be idempotent to avoid unexpected results. 
For instance, it is potentially dangerous to run a statement such as `UPDATE table SET column = column + 1` as it could be run multiple times against some rows. - The partitions are committed automatically - there is no support for Commit or Rollback. If the call returns an error, or if the client issuing the ExecuteSql call dies, it is possible that some rows had the statement executed on them successfully. It is also possible that the statement was never executed against other rows. - Partitioned DML transactions may only contain the execution of a single DML statement via ExecuteSql or ExecuteStreamingSql. - If any error is encountered during the execution of the partitioned DML operation (for instance, a UNIQUE INDEX violation, division by zero, or a value that cannot be stored due to schema constraints), then the operation is stopped at that point and an error is returned. It is possible that at this point, some partitions have been committed (or even committed multiple times), and other partitions have not been run at all. Given the above, Partitioned DML is a good fit for large, database-wide operations that are idempotent, such as deleting old rows from a very large table. # Execute the read or SQL query in a temporary transaction. This is the most efficient way to execute a transaction that consists of a single SQL query. "excludeTxnFromChangeStreams": True or False, # When `exclude_txn_from_change_streams` is set to `true`: * Modifications from this transaction will not be recorded in change streams with DDL option `allow_txn_exclusion=true` that are tracking columns modified by these transactions. * Modifications from this transaction will be recorded in change streams with DDL option `allow_txn_exclusion=false` (or not set) that are tracking columns modified by these transactions. When `exclude_txn_from_change_streams` is set to `false` or not set, modifications from this transaction will be recorded in all change streams that are tracking columns modified by these transactions. `exclude_txn_from_change_streams` may only be specified for read-write or partitioned-dml transactions, otherwise the API will return an `INVALID_ARGUMENT` error. "partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction. Authorization to begin a Partitioned DML transaction requires `spanner.databases.beginPartitionedDmlTransaction` permission on the `session` resource. }, @@ -1414,7 +1414,7 @@
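For the read-only side, here is a similarly hedged sketch of the `singleUse` option documented above: a snapshot read with a bounded-staleness timestamp bound, in which Cloud Spanner picks the freshest timestamp within the bound that the closest replica can serve without blocking. The session name is again a placeholder, and reading the chosen timestamp back from `ResultSetMetadata.transaction` relies on `returnReadTimestamp` behaving as described above.

```python
from googleapiclient import discovery

# Placeholder session resource name; create one first with sessions().create().
SESSION = "projects/my-project/instances/my-instance/databases/my-db/sessions/my-session"

service = discovery.build("spanner", "v1")
sessions = service.projects().instances().databases().sessions()

# A single-use snapshot read: no Commit/Rollback and no locks. Per the notes
# above, maxStaleness is only legal for single-use read-only transactions.
result = sessions.executeSql(
    session=SESSION,
    body={
        "sql": "SELECT 1",
        "transaction": {
            "singleUse": {
                "readOnly": {
                    "maxStaleness": "15s",        # bound on how stale the read may be
                    "returnReadTimestamp": True,  # ask for the timestamp Spanner chose
                }
            }
        },
    },
).execute()

# The chosen read timestamp, when returned, appears in the result metadata.
print(result.get("metadata", {}).get("transaction", {}).get("readTimestamp"))
```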

Method Details

"resumeToken": "A String", # If this request is resuming a previously interrupted read, `resume_token` should be copied from the last PartialResultSet yielded before the interruption. Doing this enables the new read to resume where the last read left off. The rest of the request parameters must exactly match the request that yielded this token. "table": "A String", # Required. The name of the table in the database to be read. "transaction": { # This message is used to select the transaction in which a Read or ExecuteSql call runs. See TransactionOptions for more information about transactions. # The transaction to use. If none is provided, the default is a temporary read-only transaction with strong concurrency. - "begin": { # Transactions: Each session can have at most one active transaction at a time (note that standalone reads and queries use a transaction internally and do count towards the one transaction limit). After the active transaction is completed, the session can immediately be re-used for the next transaction. It is not necessary to create a new session for each transaction. Transaction modes: Cloud Spanner supports three transaction modes: 1. Locking read-write. This type of transaction is the only way to write data into Cloud Spanner. These transactions rely on pessimistic locking and, if necessary, two-phase commit. Locking read-write transactions may abort, requiring the application to retry. 2. Snapshot read-only. Snapshot read-only transactions provide guaranteed consistency across several reads, but do not allow writes. Snapshot read-only transactions can be configured to read at timestamps in the past, or configured to perform a strong read (where Spanner will select a timestamp such that the read is guaranteed to see the effects of all transactions that have committed before the start of the read). Snapshot read-only transactions do not need to be committed. Queries on change streams must be performed with the snapshot read-only transaction mode, specifying a strong read. Please see TransactionOptions.ReadOnly.strong for more details. 3. Partitioned DML. This type of transaction is used to execute a single Partitioned DML statement. Partitioned DML partitions the key space and runs the DML statement over each partition in parallel using separate, internal transactions that commit independently. Partitioned DML transactions do not need to be committed. For transactions that only read, snapshot read-only transactions provide simpler semantics and are almost always faster. In particular, read-only transactions do not take locks, so they do not conflict with read-write transactions. As a consequence of not taking locks, they also do not abort, so retry loops are not needed. Transactions may only read-write data in a single database. They may, however, read-write data in different tables within that database. Locking read-write transactions: Locking transactions may be used to atomically read-modify-write data anywhere in a database. This type of transaction is externally consistent. Clients should attempt to minimize the amount of time a transaction is active. Faster transactions commit with higher probability and cause less contention. Cloud Spanner attempts to keep read locks active as long as the transaction continues to do reads, and the transaction has not been terminated by Commit or Rollback. Long periods of inactivity at the client may cause Cloud Spanner to release a transaction's locks and abort it. 
Conceptually, a read-write transaction consists of zero or more reads or SQL statements followed by Commit. At any time before Commit, the client can send a Rollback request to abort the transaction. Semantics: Cloud Spanner can commit the transaction if all read locks it acquired are still valid at commit time, and it is able to acquire write locks for all writes. Cloud Spanner can abort the transaction for any reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees that the transaction has not modified any user data in Cloud Spanner. Unless the transaction commits, Cloud Spanner makes no guarantees about how long the transaction's locks were held for. It is an error to use Cloud Spanner locks for any sort of mutual exclusion other than between Cloud Spanner transactions themselves. Retrying aborted transactions: When a transaction aborts, the application can choose to retry the whole transaction again. To maximize the chances of successfully committing the retry, the client should execute the retry in the same session as the original attempt. The original session's lock priority increases with each consecutive abort, meaning that each attempt has a slightly better chance of success than the previous. Note that the lock priority is preserved per session (not per transaction). Lock priority is set by the first read or write in the first attempt of a read-write transaction. If the application starts a new session to retry the whole transaction, the transaction loses its original lock priority. Moreover, the lock priority is only preserved if the transaction fails with an `ABORTED` error. Under some circumstances (for example, many transactions attempting to modify the same row(s)), a transaction can abort many times in a short period before successfully committing. Thus, it is not a good idea to cap the number of retries a transaction can attempt; instead, it is better to limit the total amount of time spent retrying. Idle transactions: A transaction is considered idle if it has no outstanding reads or SQL queries and has not started a read or SQL query within the last 10 seconds. Idle transactions can be aborted by Cloud Spanner so that they don't hold on to locks indefinitely. If an idle transaction is aborted, the commit will fail with the error `ABORTED`. If this behavior is undesirable, periodically executing a simple SQL query in the transaction (for example, `SELECT 1`) prevents the transaction from becoming idle. Snapshot read-only transactions: Snapshot read-only transactions provide a simpler method than locking read-write transactions for doing several consistent reads. However, this type of transaction does not support writes. Snapshot transactions do not take locks. Instead, they work by choosing a Cloud Spanner timestamp, then executing all reads at that timestamp. Since they do not acquire locks, they do not block concurrent read-write transactions. Unlike locking read-write transactions, snapshot read-only transactions never abort. They can fail if the chosen read timestamp is garbage collected; however, the default garbage collection policy is generous enough that most applications do not need to worry about this in practice. Snapshot read-only transactions do not need to call Commit or Rollback (and in fact are not permitted to do so). To execute a snapshot transaction, the client specifies a timestamp bound, which tells Cloud Spanner how to choose a read timestamp. The types of timestamp bound are: - Strong (the default). - Bounded staleness. 
- Exact staleness. If the Cloud Spanner database to be read is geographically distributed, stale read-only transactions can execute more quickly than strong or read-write transactions, because they are able to execute far from the leader replica. Each type of timestamp bound is discussed in detail below. Strong: Strong reads are guaranteed to see the effects of all transactions that have committed before the start of the read. Furthermore, all rows yielded by a single read are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. Strong reads are not repeatable: two consecutive strong read-only transactions might return inconsistent results if there are concurrent writes. If consistency across reads is required, the reads should be executed within a transaction or at an exact read timestamp. Queries on change streams (see below for more details) must also specify the strong read timestamp bound. See TransactionOptions.ReadOnly.strong. Exact staleness: These timestamp bounds execute reads at a user-specified timestamp. Reads at a timestamp are guaranteed to see a consistent prefix of the global transaction history: they observe modifications done by all transactions with a commit timestamp less than or equal to the read timestamp, and observe none of the modifications done by transactions with a larger commit timestamp. They will block until all conflicting transactions that may be assigned commit timestamps <= the read timestamp have finished. The timestamp can either be expressed as an absolute Cloud Spanner commit timestamp or a staleness relative to the current time. These modes do not require a "negotiation phase" to pick a timestamp. As a result, they execute slightly faster than the equivalent boundedly stale concurrency modes. On the other hand, boundedly stale reads usually return fresher results. See TransactionOptions.ReadOnly.read_timestamp and TransactionOptions.ReadOnly.exact_staleness. Bounded staleness: Bounded staleness modes allow Cloud Spanner to pick the read timestamp, subject to a user-provided staleness bound. Cloud Spanner chooses the newest timestamp within the staleness bound that allows execution of the reads at the closest available replica without blocking. All rows yielded are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. Boundedly stale reads are not repeatable: two stale reads, even if they use the same staleness bound, can execute at different timestamps and thus return inconsistent results. Boundedly stale reads execute in two phases: the first phase negotiates a timestamp among all replicas needed to serve the read. In the second phase, reads are executed at the negotiated timestamp. As a result of the two-phase execution, bounded staleness reads are usually a little slower than comparable exact staleness reads. However, they are typically able to return fresher results, and are more likely to execute at the closest replica. Because the timestamp negotiation requires up-front knowledge of which rows will be read, it can only be used with single-use read-only transactions. See TransactionOptions.ReadOnly.max_staleness and TransactionOptions.ReadOnly.min_read_timestamp. Old read timestamps and garbage collection: Cloud Spanner continuously garbage collects deleted and overwritten data in the background to reclaim storage space. This process is known as "version GC". 
By default, version GC reclaims versions after they are one hour old. Because of this, Cloud Spanner cannot perform reads at read timestamps more than one hour in the past. This restriction also applies to in-progress reads and/or SQL queries whose timestamps become too old while executing. Reads and SQL queries with too-old read timestamps fail with the error `FAILED_PRECONDITION`. You can configure and extend the `VERSION_RETENTION_PERIOD` of a database up to a period as long as one week, which allows Cloud Spanner to perform reads up to one week in the past. Querying change streams: A change stream is a schema object that can be configured to watch data changes on the entire database, a set of tables, or a set of columns in a database. When a change stream is created, Spanner automatically defines a corresponding SQL Table-Valued Function (TVF) that can be used to query the change records in the associated change stream using the ExecuteStreamingSql API. The name of the TVF for a change stream is generated from the name of the change stream: READ_ followed by the name of the change stream. All queries on change stream TVFs must be executed using the ExecuteStreamingSql API with a single-use read-only transaction with a strong read-only timestamp_bound. The change stream TVF allows users to specify the start_timestamp and end_timestamp for the time range of interest. All change records within the retention period are accessible using the strong read-only timestamp_bound. All other TransactionOptions are invalid for change stream queries. In addition, if TransactionOptions.read_only.return_read_timestamp is set to true, a special value of 2^63 - 2 will be returned in the Transaction message that describes the transaction, instead of a valid read timestamp. This special value should be discarded and not used for any subsequent queries. Please see https://cloud.google.com/spanner/docs/change-streams for more details on how to query the change stream TVFs. Partitioned DML transactions: Partitioned DML transactions are used to execute DML statements with a different execution strategy that provides different, and often better, scalability properties for large, table-wide operations than DML in a ReadWrite transaction. Smaller-scoped statements, such as those in an OLTP workload, should prefer ReadWrite transactions. Partitioned DML partitions the keyspace and runs the DML statement on each partition in separate, internal transactions. These transactions commit automatically when complete, and run independently from one another. To reduce lock contention, this execution strategy only acquires read locks on rows that match the WHERE clause of the statement. Additionally, the smaller per-partition transactions hold locks for less time. That said, Partitioned DML is not a drop-in replacement for standard DML used in ReadWrite transactions. - The DML statement must be fully-partitionable. Specifically, the statement must be expressible as the union of many statements which each access only a single row of the table. - The statement is not applied atomically to all rows of the table. Rather, the statement is applied atomically to partitions of the table, in independent transactions. Secondary index rows are updated atomically with the base table rows. - Partitioned DML does not guarantee exactly-once execution semantics against a partition. The statement is applied at least once to each partition. It is strongly recommended that the DML statement be idempotent to avoid unexpected results. 
For instance, it is potentially dangerous to run a statement such as `UPDATE table SET column = column + 1` as it could be run multiple times against some rows. - The partitions are committed automatically - there is no support for Commit or Rollback. If the call returns an error, or if the client issuing the ExecuteSql call dies, it is possible that some rows had the statement executed on them successfully. It is also possible that the statement was never executed against other rows. - Partitioned DML transactions may only contain the execution of a single DML statement via ExecuteSql or ExecuteStreamingSql. - If any error is encountered during the execution of the partitioned DML operation (for instance, a UNIQUE INDEX violation, division by zero, or a value that cannot be stored due to schema constraints), then the operation is stopped at that point and an error is returned. It is possible that at this point, some partitions have been committed (or even committed multiple times), and other partitions have not been run at all. Given the above, Partitioned DML is a good fit for large, database-wide operations that are idempotent, such as deleting old rows from a very large table. # Begin a new transaction and execute this read or SQL query in it. The transaction ID of the new transaction is returned in ResultSetMetadata.transaction, which is a Transaction. + "begin": { # Transactions: Each session can have at most one active transaction at a time (note that standalone reads and queries use a transaction internally and do count towards the one transaction limit). After the active transaction is completed, the session can immediately be re-used for the next transaction. It is not necessary to create a new session for each transaction. Transaction modes: Cloud Spanner supports three transaction modes: 1. Locking read-write. This type of transaction is the only way to write data into Cloud Spanner. These transactions rely on pessimistic locking and, if necessary, two-phase commit. Locking read-write transactions may abort, requiring the application to retry. 2. Snapshot read-only. Snapshot read-only transactions provide guaranteed consistency across several reads, but do not allow writes. Snapshot read-only transactions can be configured to read at timestamps in the past, or configured to perform a strong read (where Spanner will select a timestamp such that the read is guaranteed to see the effects of all transactions that have committed before the start of the read). Snapshot read-only transactions do not need to be committed. Queries on change streams must be performed with the snapshot read-only transaction mode, specifying a strong read. See TransactionOptions.ReadOnly.strong for more details. 3. Partitioned DML. This type of transaction is used to execute a single Partitioned DML statement. Partitioned DML partitions the key space and runs the DML statement over each partition in parallel using separate, internal transactions that commit independently. Partitioned DML transactions do not need to be committed. For transactions that only read, snapshot read-only transactions provide simpler semantics and are almost always faster. In particular, read-only transactions do not take locks, so they do not conflict with read-write transactions. As a consequence of not taking locks, they also do not abort, so retry loops are not needed. Transactions may only read-write data in a single database. They may, however, read-write data in different tables within that database. 
Locking read-write transactions: Locking transactions may be used to atomically read-modify-write data anywhere in a database. This type of transaction is externally consistent. Clients should attempt to minimize the amount of time a transaction is active. Faster transactions commit with higher probability and cause less contention. Cloud Spanner attempts to keep read locks active as long as the transaction continues to do reads, and the transaction has not been terminated by Commit or Rollback. Long periods of inactivity at the client may cause Cloud Spanner to release a transaction's locks and abort it. Conceptually, a read-write transaction consists of zero or more reads or SQL statements followed by Commit. At any time before Commit, the client can send a Rollback request to abort the transaction. Semantics: Cloud Spanner can commit the transaction if all read locks it acquired are still valid at commit time, and it is able to acquire write locks for all writes. Cloud Spanner can abort the transaction for any reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees that the transaction has not modified any user data in Cloud Spanner. Unless the transaction commits, Cloud Spanner makes no guarantees about how long the transaction's locks were held for. It is an error to use Cloud Spanner locks for any sort of mutual exclusion other than between Cloud Spanner transactions themselves. Retrying aborted transactions: When a transaction aborts, the application can choose to retry the whole transaction again. To maximize the chances of successfully committing the retry, the client should execute the retry in the same session as the original attempt. The original session's lock priority increases with each consecutive abort, meaning that each attempt has a slightly better chance of success than the previous. Note that the lock priority is preserved per session (not per transaction). Lock priority is set by the first read or write in the first attempt of a read-write transaction. If the application starts a new session to retry the whole transaction, the transaction loses its original lock priority. Moreover, the lock priority is only preserved if the transaction fails with an `ABORTED` error. Under some circumstances (for example, many transactions attempting to modify the same row(s)), a transaction can abort many times in a short period before successfully committing. Thus, it is not a good idea to cap the number of retries a transaction can attempt; instead, it is better to limit the total amount of time spent retrying. Idle transactions: A transaction is considered idle if it has no outstanding reads or SQL queries and has not started a read or SQL query within the last 10 seconds. Idle transactions can be aborted by Cloud Spanner so that they don't hold on to locks indefinitely. If an idle transaction is aborted, the commit will fail with the error `ABORTED`. If this behavior is undesirable, periodically executing a simple SQL query in the transaction (for example, `SELECT 1`) prevents the transaction from becoming idle. Snapshot read-only transactions: Snapshot read-only transactions provide a simpler method than locking read-write transactions for doing several consistent reads. However, this type of transaction does not support writes. Snapshot transactions do not take locks. Instead, they work by choosing a Cloud Spanner timestamp, then executing all reads at that timestamp. Since they do not acquire locks, they do not block concurrent read-write transactions. 
Unlike locking read-write transactions, snapshot read-only transactions never abort. They can fail if the chosen read timestamp is garbage collected; however, the default garbage collection policy is generous enough that most applications do not need to worry about this in practice. Snapshot read-only transactions do not need to call Commit or Rollback (and in fact are not permitted to do so). To execute a snapshot transaction, the client specifies a timestamp bound, which tells Cloud Spanner how to choose a read timestamp. The types of timestamp bound are: - Strong (the default). - Bounded staleness. - Exact staleness. If the Cloud Spanner database to be read is geographically distributed, stale read-only transactions can execute more quickly than strong or read-write transactions, because they are able to execute far from the leader replica. Each type of timestamp bound is discussed in detail below. Strong: Strong reads are guaranteed to see the effects of all transactions that have committed before the start of the read. Furthermore, all rows yielded by a single read are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. Strong reads are not repeatable: two consecutive strong read-only transactions might return inconsistent results if there are concurrent writes. If consistency across reads is required, the reads should be executed within a transaction or at an exact read timestamp. Queries on change streams (see below for more details) must also specify the strong read timestamp bound. See TransactionOptions.ReadOnly.strong. Exact staleness: These timestamp bounds execute reads at a user-specified timestamp. Reads at a timestamp are guaranteed to see a consistent prefix of the global transaction history: they observe modifications done by all transactions with a commit timestamp less than or equal to the read timestamp, and observe none of the modifications done by transactions with a larger commit timestamp. They will block until all conflicting transactions that may be assigned commit timestamps <= the read timestamp have finished. The timestamp can either be expressed as an absolute Cloud Spanner commit timestamp or a staleness relative to the current time. These modes do not require a "negotiation phase" to pick a timestamp. As a result, they execute slightly faster than the equivalent boundedly stale concurrency modes. On the other hand, boundedly stale reads usually return fresher results. See TransactionOptions.ReadOnly.read_timestamp and TransactionOptions.ReadOnly.exact_staleness. Bounded staleness: Bounded staleness modes allow Cloud Spanner to pick the read timestamp, subject to a user-provided staleness bound. Cloud Spanner chooses the newest timestamp within the staleness bound that allows execution of the reads at the closest available replica without blocking. All rows yielded are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. Boundedly stale reads are not repeatable: two stale reads, even if they use the same staleness bound, can execute at different timestamps and thus return inconsistent results. Boundedly stale reads execute in two phases: the first phase negotiates a timestamp among all replicas needed to serve the read. In the second phase, reads are executed at the negotiated timestamp. As a result of the two-phase execution, bounded staleness reads are usually a little slower than comparable exact staleness reads. 
However, they are typically able to return fresher results, and are more likely to execute at the closest replica. Because the timestamp negotiation requires up-front knowledge of which rows will be read, it can only be used with single-use read-only transactions. See TransactionOptions.ReadOnly.max_staleness and TransactionOptions.ReadOnly.min_read_timestamp. Old read timestamps and garbage collection: Cloud Spanner continuously garbage collects deleted and overwritten data in the background to reclaim storage space. This process is known as "version GC". By default, version GC reclaims versions after they are one hour old. Because of this, Cloud Spanner cannot perform reads at read timestamps more than one hour in the past. This restriction also applies to in-progress reads and/or SQL queries whose timestamps become too old while executing. Reads and SQL queries with too-old read timestamps fail with the error `FAILED_PRECONDITION`. You can configure and extend the `VERSION_RETENTION_PERIOD` of a database up to a period as long as one week, which allows Cloud Spanner to perform reads up to one week in the past. Querying change streams: A change stream is a schema object that can be configured to watch data changes on the entire database, a set of tables, or a set of columns in a database. When a change stream is created, Spanner automatically defines a corresponding SQL Table-Valued Function (TVF) that can be used to query the change records in the associated change stream using the ExecuteStreamingSql API. The name of the TVF for a change stream is generated from the name of the change stream: READ_ followed by the name of the change stream. All queries on change stream TVFs must be executed using the ExecuteStreamingSql API with a single-use read-only transaction with a strong read-only timestamp_bound. The change stream TVF allows users to specify the start_timestamp and end_timestamp for the time range of interest. All change records within the retention period are accessible using the strong read-only timestamp_bound. All other TransactionOptions are invalid for change stream queries. In addition, if TransactionOptions.read_only.return_read_timestamp is set to true, a special value of 2^63 - 2 will be returned in the Transaction message that describes the transaction, instead of a valid read timestamp. This special value should be discarded and not used for any subsequent queries. Please see https://cloud.google.com/spanner/docs/change-streams for more details on how to query the change stream TVFs. Partitioned DML transactions: Partitioned DML transactions are used to execute DML statements with a different execution strategy that provides different, and often better, scalability properties for large, table-wide operations than DML in a ReadWrite transaction. Smaller-scoped statements, such as those in an OLTP workload, should prefer ReadWrite transactions. Partitioned DML partitions the keyspace and runs the DML statement on each partition in separate, internal transactions. These transactions commit automatically when complete, and run independently from one another. To reduce lock contention, this execution strategy only acquires read locks on rows that match the WHERE clause of the statement. Additionally, the smaller per-partition transactions hold locks for less time. That said, Partitioned DML is not a drop-in replacement for standard DML used in ReadWrite transactions. - The DML statement must be fully-partitionable. 
Specifically, the statement must be expressible as the union of many statements which each access only a single row of the table. - The statement is not applied atomically to all rows of the table. Rather, the statement is applied atomically to partitions of the table, in independent transactions. Secondary index rows are updated atomically with the base table rows. - Partitioned DML does not guarantee exactly-once execution semantics against a partition. The statement is applied at least once to each partition. It is strongly recommended that the DML statement be idempotent to avoid unexpected results. For instance, it is potentially dangerous to run a statement such as `UPDATE table SET column = column + 1` as it could be run multiple times against some rows. - The partitions are committed automatically - there is no support for Commit or Rollback. If the call returns an error, or if the client issuing the ExecuteSql call dies, it is possible that some rows had the statement executed on them successfully. It is also possible that the statement was never executed against other rows. - Partitioned DML transactions may only contain the execution of a single DML statement via ExecuteSql or ExecuteStreamingSql. - If any error is encountered during the execution of the partitioned DML operation (for instance, a UNIQUE INDEX violation, division by zero, or a value that cannot be stored due to schema constraints), then the operation is stopped at that point and an error is returned. It is possible that at this point, some partitions have been committed (or even committed multiple times), and other partitions have not been run at all. Given the above, Partitioned DML is a good fit for large, database-wide operations that are idempotent, such as deleting old rows from a very large table. # Begin a new transaction and execute this read or SQL query in it. The transaction ID of the new transaction is returned in ResultSetMetadata.transaction, which is a Transaction. "excludeTxnFromChangeStreams": True or False, # When `exclude_txn_from_change_streams` is set to `true`: * Modifications from this transaction will not be recorded in change streams with DDL option `allow_txn_exclusion=true` that are tracking columns modified by these transactions. * Modifications from this transaction will be recorded in change streams with DDL option `allow_txn_exclusion=false or not set` that are tracking columns modified by these transactions. When `exclude_txn_from_change_streams` is set to `false` or not set, modifications from this transaction will be recorded in all change streams that are tracking columns modified by these transactions. `exclude_txn_from_change_streams` may only be specified for read-write or partitioned-dml transactions, otherwise the API will return an `INVALID_ARGUMENT` error. "partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction. Authorization to begin a Partitioned DML transaction requires `spanner.databases.beginPartitionedDmlTransaction` permission on the `session` resource. }, @@ -1431,7 +1431,7 @@
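The `begin` selector documented in the hunk above can be exercised through this library's generated Spanner client. The following is a minimal sketch, not an official sample: the project, instance, database, session, table, and column names are placeholders, and application default credentials are assumed.

from googleapiclient.discovery import build

# Discovery-based Spanner client; relies on application default credentials.
spanner = build("spanner", "v1")

# Placeholder session resource name; real sessions come from sessions().create().
session = ("projects/my-project/instances/my-instance"
           "/databases/my-db/sessions/my-session")

body = {
    "table": "Albums",                    # placeholder table
    "columns": ["SingerId", "AlbumId"],   # placeholder columns
    "keySet": {"all": True},
    "transaction": {
        # Begin a new read-write transaction as part of this read. Its ID is
        # returned in ResultSetMetadata.transaction for follow-up requests.
        "begin": {
            "readWrite": {},
            # Keep this transaction out of change streams created with the
            # DDL option allow_txn_exclusion=true.
            "excludeTxnFromChangeStreams": True,
        },
    },
}

result = (spanner.projects().instances().databases().sessions()
          .read(session=session, body=body).execute())
txn_id = result["metadata"]["transaction"]["id"]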

Method Details

}, }, "id": "A String", # Execute the read or SQL query in a previously-started transaction. - "singleUse": { # Transactions: Each session can have at most one active transaction at a time (note that standalone reads and queries use a transaction internally and do count towards the one transaction limit). After the active transaction is completed, the session can immediately be re-used for the next transaction. It is not necessary to create a new session for each transaction. Transaction modes: Cloud Spanner supports three transaction modes: 1. Locking read-write. This type of transaction is the only way to write data into Cloud Spanner. These transactions rely on pessimistic locking and, if necessary, two-phase commit. Locking read-write transactions may abort, requiring the application to retry. 2. Snapshot read-only. Snapshot read-only transactions provide guaranteed consistency across several reads, but do not allow writes. Snapshot read-only transactions can be configured to read at timestamps in the past, or configured to perform a strong read (where Spanner will select a timestamp such that the read is guaranteed to see the effects of all transactions that have committed before the start of the read). Snapshot read-only transactions do not need to be committed. Queries on change streams must be performed with the snapshot read-only transaction mode, specifying a strong read. Please see TransactionOptions.ReadOnly.strong for more details. 3. Partitioned DML. This type of transaction is used to execute a single Partitioned DML statement. Partitioned DML partitions the key space and runs the DML statement over each partition in parallel using separate, internal transactions that commit independently. Partitioned DML transactions do not need to be committed. For transactions that only read, snapshot read-only transactions provide simpler semantics and are almost always faster. In particular, read-only transactions do not take locks, so they do not conflict with read-write transactions. As a consequence of not taking locks, they also do not abort, so retry loops are not needed. Transactions may only read-write data in a single database. They may, however, read-write data in different tables within that database. Locking read-write transactions: Locking transactions may be used to atomically read-modify-write data anywhere in a database. This type of transaction is externally consistent. Clients should attempt to minimize the amount of time a transaction is active. Faster transactions commit with higher probability and cause less contention. Cloud Spanner attempts to keep read locks active as long as the transaction continues to do reads, and the transaction has not been terminated by Commit or Rollback. Long periods of inactivity at the client may cause Cloud Spanner to release a transaction's locks and abort it. Conceptually, a read-write transaction consists of zero or more reads or SQL statements followed by Commit. At any time before Commit, the client can send a Rollback request to abort the transaction. Semantics: Cloud Spanner can commit the transaction if all read locks it acquired are still valid at commit time, and it is able to acquire write locks for all writes. Cloud Spanner can abort the transaction for any reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees that the transaction has not modified any user data in Cloud Spanner. Unless the transaction commits, Cloud Spanner makes no guarantees about how long the transaction's locks were held for. 
It is an error to use Cloud Spanner locks for any sort of mutual exclusion other than between Cloud Spanner transactions themselves. Retrying aborted transactions: When a transaction aborts, the application can choose to retry the whole transaction again. To maximize the chances of successfully committing the retry, the client should execute the retry in the same session as the original attempt. The original session's lock priority increases with each consecutive abort, meaning that each attempt has a slightly better chance of success than the previous. Note that the lock priority is preserved per session (not per transaction). Lock priority is set by the first read or write in the first attempt of a read-write transaction. If the application starts a new session to retry the whole transaction, the transaction loses its original lock priority. Moreover, the lock priority is only preserved if the transaction fails with an `ABORTED` error. Under some circumstances (for example, many transactions attempting to modify the same row(s)), a transaction can abort many times in a short period before successfully committing. Thus, it is not a good idea to cap the number of retries a transaction can attempt; instead, it is better to limit the total amount of time spent retrying. Idle transactions: A transaction is considered idle if it has no outstanding reads or SQL queries and has not started a read or SQL query within the last 10 seconds. Idle transactions can be aborted by Cloud Spanner so that they don't hold on to locks indefinitely. If an idle transaction is aborted, the commit will fail with the error `ABORTED`. If this behavior is undesirable, periodically executing a simple SQL query in the transaction (for example, `SELECT 1`) prevents the transaction from becoming idle. Snapshot read-only transactions: Snapshot read-only transactions provide a simpler method than locking read-write transactions for doing several consistent reads. However, this type of transaction does not support writes. Snapshot transactions do not take locks. Instead, they work by choosing a Cloud Spanner timestamp, then executing all reads at that timestamp. Since they do not acquire locks, they do not block concurrent read-write transactions. Unlike locking read-write transactions, snapshot read-only transactions never abort. They can fail if the chosen read timestamp is garbage collected; however, the default garbage collection policy is generous enough that most applications do not need to worry about this in practice. Snapshot read-only transactions do not need to call Commit or Rollback (and in fact are not permitted to do so). To execute a snapshot transaction, the client specifies a timestamp bound, which tells Cloud Spanner how to choose a read timestamp. The types of timestamp bound are: - Strong (the default). - Bounded staleness. - Exact staleness. If the Cloud Spanner database to be read is geographically distributed, stale read-only transactions can execute more quickly than strong or read-write transactions, because they are able to execute far from the leader replica. Each type of timestamp bound is discussed in detail below. Strong: Strong reads are guaranteed to see the effects of all transactions that have committed before the start of the read. Furthermore, all rows yielded by a single read are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. 
Strong reads are not repeatable: two consecutive strong read-only transactions might return inconsistent results if there are concurrent writes. If consistency across reads is required, the reads should be executed within a transaction or at an exact read timestamp. Queries on change streams (see below for more details) must also specify the strong read timestamp bound. See TransactionOptions.ReadOnly.strong. Exact staleness: These timestamp bounds execute reads at a user-specified timestamp. Reads at a timestamp are guaranteed to see a consistent prefix of the global transaction history: they observe modifications done by all transactions with a commit timestamp less than or equal to the read timestamp, and observe none of the modifications done by transactions with a larger commit timestamp. They will block until all conflicting transactions that may be assigned commit timestamps <= the read timestamp have finished. The timestamp can either be expressed as an absolute Cloud Spanner commit timestamp or a staleness relative to the current time. These modes do not require a "negotiation phase" to pick a timestamp. As a result, they execute slightly faster than the equivalent boundedly stale concurrency modes. On the other hand, boundedly stale reads usually return fresher results. See TransactionOptions.ReadOnly.read_timestamp and TransactionOptions.ReadOnly.exact_staleness. Bounded staleness: Bounded staleness modes allow Cloud Spanner to pick the read timestamp, subject to a user-provided staleness bound. Cloud Spanner chooses the newest timestamp within the staleness bound that allows execution of the reads at the closest available replica without blocking. All rows yielded are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. Boundedly stale reads are not repeatable: two stale reads, even if they use the same staleness bound, can execute at different timestamps and thus return inconsistent results. Boundedly stale reads execute in two phases: the first phase negotiates a timestamp among all replicas needed to serve the read. In the second phase, reads are executed at the negotiated timestamp. As a result of the two-phase execution, bounded staleness reads are usually a little slower than comparable exact staleness reads. However, they are typically able to return fresher results, and are more likely to execute at the closest replica. Because the timestamp negotiation requires up-front knowledge of which rows will be read, it can only be used with single-use read-only transactions. See TransactionOptions.ReadOnly.max_staleness and TransactionOptions.ReadOnly.min_read_timestamp. Old read timestamps and garbage collection: Cloud Spanner continuously garbage collects deleted and overwritten data in the background to reclaim storage space. This process is known as "version GC". By default, version GC reclaims versions after they are one hour old. Because of this, Cloud Spanner cannot perform reads at read timestamps more than one hour in the past. This restriction also applies to in-progress reads and/or SQL queries whose timestamps become too old while executing. Reads and SQL queries with too-old read timestamps fail with the error `FAILED_PRECONDITION`. You can configure and extend the `VERSION_RETENTION_PERIOD` of a database up to a period as long as one week, which allows Cloud Spanner to perform reads up to one week in the past. 
Querying change streams: A change stream is a schema object that can be configured to watch data changes on the entire database, a set of tables, or a set of columns in a database. When a change stream is created, Spanner automatically defines a corresponding SQL Table-Valued Function (TVF) that can be used to query the change records in the associated change stream using the ExecuteStreamingSql API. The name of the TVF for a change stream is generated from the name of the change stream: READ_ followed by the name of the change stream. All queries on change stream TVFs must be executed using the ExecuteStreamingSql API with a single-use read-only transaction with a strong read-only timestamp_bound. The change stream TVF allows users to specify the start_timestamp and end_timestamp for the time range of interest. All change records within the retention period are accessible using the strong read-only timestamp_bound. All other TransactionOptions are invalid for change stream queries. In addition, if TransactionOptions.read_only.return_read_timestamp is set to true, a special value of 2^63 - 2 will be returned in the Transaction message that describes the transaction, instead of a valid read timestamp. This special value should be discarded and not used for any subsequent queries. Please see https://cloud.google.com/spanner/docs/change-streams for more details on how to query the change stream TVFs. Partitioned DML transactions: Partitioned DML transactions are used to execute DML statements with a different execution strategy that provides different, and often better, scalability properties for large, table-wide operations than DML in a ReadWrite transaction. Smaller-scoped statements, such as those in an OLTP workload, should prefer ReadWrite transactions. Partitioned DML partitions the keyspace and runs the DML statement on each partition in separate, internal transactions. These transactions commit automatically when complete, and run independently from one another. To reduce lock contention, this execution strategy only acquires read locks on rows that match the WHERE clause of the statement. Additionally, the smaller per-partition transactions hold locks for less time. That said, Partitioned DML is not a drop-in replacement for standard DML used in ReadWrite transactions. - The DML statement must be fully-partitionable. Specifically, the statement must be expressible as the union of many statements which each access only a single row of the table. - The statement is not applied atomically to all rows of the table. Rather, the statement is applied atomically to partitions of the table, in independent transactions. Secondary index rows are updated atomically with the base table rows. - Partitioned DML does not guarantee exactly-once execution semantics against a partition. The statement is applied at least once to each partition. It is strongly recommended that the DML statement be idempotent to avoid unexpected results. For instance, it is potentially dangerous to run a statement such as `UPDATE table SET column = column + 1` as it could be run multiple times against some rows. - The partitions are committed automatically - there is no support for Commit or Rollback. If the call returns an error, or if the client issuing the ExecuteSql call dies, it is possible that some rows had the statement executed on them successfully. It is also possible that the statement was never executed against other rows. - Partitioned DML transactions may only contain the execution of a single DML statement via ExecuteSql or ExecuteStreamingSql. 
- If any error is encountered during the execution of the partitioned DML operation (for instance, a UNIQUE INDEX violation, division by zero, or a value that cannot be stored due to schema constraints), then the operation is stopped at that point and an error is returned. It is possible that at this point, some partitions have been committed (or even committed multiple times), and other partitions have not been run at all. Given the above, Partitioned DML is a good fit for large, database-wide operations that are idempotent, such as deleting old rows from a very large table. # Execute the read or SQL query in a temporary transaction. This is the most efficient way to execute a transaction that consists of a single SQL query. + "singleUse": { # Transactions: Each session can have at most one active transaction at a time (note that standalone reads and queries use a transaction internally and do count towards the one transaction limit). After the active transaction is completed, the session can immediately be re-used for the next transaction. It is not necessary to create a new session for each transaction. Transaction modes: Cloud Spanner supports three transaction modes: 1. Locking read-write. This type of transaction is the only way to write data into Cloud Spanner. These transactions rely on pessimistic locking and, if necessary, two-phase commit. Locking read-write transactions may abort, requiring the application to retry. 2. Snapshot read-only. Snapshot read-only transactions provide guaranteed consistency across several reads, but do not allow writes. Snapshot read-only transactions can be configured to read at timestamps in the past, or configured to perform a strong read (where Spanner will select a timestamp such that the read is guaranteed to see the effects of all transactions that have committed before the start of the read). Snapshot read-only transactions do not need to be committed. Queries on change streams must be performed with the snapshot read-only transaction mode, specifying a strong read. See TransactionOptions.ReadOnly.strong for more details. 3. Partitioned DML. This type of transaction is used to execute a single Partitioned DML statement. Partitioned DML partitions the key space and runs the DML statement over each partition in parallel using separate, internal transactions that commit independently. Partitioned DML transactions do not need to be committed. For transactions that only read, snapshot read-only transactions provide simpler semantics and are almost always faster. In particular, read-only transactions do not take locks, so they do not conflict with read-write transactions. As a consequence of not taking locks, they also do not abort, so retry loops are not needed. Transactions may only read-write data in a single database. They may, however, read-write data in different tables within that database. Locking read-write transactions: Locking transactions may be used to atomically read-modify-write data anywhere in a database. This type of transaction is externally consistent. Clients should attempt to minimize the amount of time a transaction is active. Faster transactions commit with higher probability and cause less contention. Cloud Spanner attempts to keep read locks active as long as the transaction continues to do reads, and the transaction has not been terminated by Commit or Rollback. Long periods of inactivity at the client may cause Cloud Spanner to release a transaction's locks and abort it. 
Conceptually, a read-write transaction consists of zero or more reads or SQL statements followed by Commit. At any time before Commit, the client can send a Rollback request to abort the transaction. Semantics: Cloud Spanner can commit the transaction if all read locks it acquired are still valid at commit time, and it is able to acquire write locks for all writes. Cloud Spanner can abort the transaction for any reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees that the transaction has not modified any user data in Cloud Spanner. Unless the transaction commits, Cloud Spanner makes no guarantees about how long the transaction's locks were held for. It is an error to use Cloud Spanner locks for any sort of mutual exclusion other than between Cloud Spanner transactions themselves. Retrying aborted transactions: When a transaction aborts, the application can choose to retry the whole transaction again. To maximize the chances of successfully committing the retry, the client should execute the retry in the same session as the original attempt. The original session's lock priority increases with each consecutive abort, meaning that each attempt has a slightly better chance of success than the previous. Note that the lock priority is preserved per session (not per transaction). Lock priority is set by the first read or write in the first attempt of a read-write transaction. If the application starts a new session to retry the whole transaction, the transaction loses its original lock priority. Moreover, the lock priority is only preserved if the transaction fails with an `ABORTED` error. Under some circumstances (for example, many transactions attempting to modify the same row(s)), a transaction can abort many times in a short period before successfully committing. Thus, it is not a good idea to cap the number of retries a transaction can attempt; instead, it is better to limit the total amount of time spent retrying. Idle transactions: A transaction is considered idle if it has no outstanding reads or SQL queries and has not started a read or SQL query within the last 10 seconds. Idle transactions can be aborted by Cloud Spanner so that they don't hold on to locks indefinitely. If an idle transaction is aborted, the commit will fail with the error `ABORTED`. If this behavior is undesirable, periodically executing a simple SQL query in the transaction (for example, `SELECT 1`) prevents the transaction from becoming idle. Snapshot read-only transactions: Snapshot read-only transactions provide a simpler method than locking read-write transactions for doing several consistent reads. However, this type of transaction does not support writes. Snapshot transactions do not take locks. Instead, they work by choosing a Cloud Spanner timestamp, then executing all reads at that timestamp. Since they do not acquire locks, they do not block concurrent read-write transactions. Unlike locking read-write transactions, snapshot read-only transactions never abort. They can fail if the chosen read timestamp is garbage collected; however, the default garbage collection policy is generous enough that most applications do not need to worry about this in practice. Snapshot read-only transactions do not need to call Commit or Rollback (and in fact are not permitted to do so). To execute a snapshot transaction, the client specifies a timestamp bound, which tells Cloud Spanner how to choose a read timestamp. The types of timestamp bound are: - Strong (the default). - Bounded staleness. 
- Exact staleness. If the Cloud Spanner database to be read is geographically distributed, stale read-only transactions can execute more quickly than strong or read-write transactions, because they are able to execute far from the leader replica. Each type of timestamp bound is discussed in detail below. Strong: Strong reads are guaranteed to see the effects of all transactions that have committed before the start of the read. Furthermore, all rows yielded by a single read are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. Strong reads are not repeatable: two consecutive strong read-only transactions might return inconsistent results if there are concurrent writes. If consistency across reads is required, the reads should be executed within a transaction or at an exact read timestamp. Queries on change streams (see below for more details) must also specify the strong read timestamp bound. See TransactionOptions.ReadOnly.strong. Exact staleness: These timestamp bounds execute reads at a user-specified timestamp. Reads at a timestamp are guaranteed to see a consistent prefix of the global transaction history: they observe modifications done by all transactions with a commit timestamp less than or equal to the read timestamp, and observe none of the modifications done by transactions with a larger commit timestamp. They will block until all conflicting transactions that may be assigned commit timestamps <= the read timestamp have finished. The timestamp can either be expressed as an absolute Cloud Spanner commit timestamp or a staleness relative to the current time. These modes do not require a "negotiation phase" to pick a timestamp. As a result, they execute slightly faster than the equivalent boundedly stale concurrency modes. On the other hand, boundedly stale reads usually return fresher results. See TransactionOptions.ReadOnly.read_timestamp and TransactionOptions.ReadOnly.exact_staleness. Bounded staleness: Bounded staleness modes allow Cloud Spanner to pick the read timestamp, subject to a user-provided staleness bound. Cloud Spanner chooses the newest timestamp within the staleness bound that allows execution of the reads at the closest available replica without blocking. All rows yielded are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. Boundedly stale reads are not repeatable: two stale reads, even if they use the same staleness bound, can execute at different timestamps and thus return inconsistent results. Boundedly stale reads execute in two phases: the first phase negotiates a timestamp among all replicas needed to serve the read. In the second phase, reads are executed at the negotiated timestamp. As a result of the two-phase execution, bounded staleness reads are usually a little slower than comparable exact staleness reads. However, they are typically able to return fresher results, and are more likely to execute at the closest replica. Because the timestamp negotiation requires up-front knowledge of which rows will be read, it can only be used with single-use read-only transactions. See TransactionOptions.ReadOnly.max_staleness and TransactionOptions.ReadOnly.min_read_timestamp. Old read timestamps and garbage collection: Cloud Spanner continuously garbage collects deleted and overwritten data in the background to reclaim storage space. This process is known as "version GC". 
By default, version GC reclaims versions after they are one hour old. Because of this, Cloud Spanner cannot perform reads at read timestamps more than one hour in the past. This restriction also applies to in-progress reads and/or SQL queries whose timestamps become too old while executing. Reads and SQL queries with too-old read timestamps fail with the error `FAILED_PRECONDITION`. You can configure and extend the `VERSION_RETENTION_PERIOD` of a database up to a period as long as one week, which allows Cloud Spanner to perform reads up to one week in the past. Querying change streams: A change stream is a schema object that can be configured to watch data changes on the entire database, a set of tables, or a set of columns in a database. When a change stream is created, Spanner automatically defines a corresponding SQL Table-Valued Function (TVF) that can be used to query the change records in the associated change stream using the ExecuteStreamingSql API. The name of the TVF for a change stream is generated from the name of the change stream: READ_ followed by the name of the change stream. All queries on change stream TVFs must be executed using the ExecuteStreamingSql API with a single-use read-only transaction with a strong read-only timestamp_bound. The change stream TVF allows users to specify the start_timestamp and end_timestamp for the time range of interest. All change records within the retention period are accessible using the strong read-only timestamp_bound. All other TransactionOptions are invalid for change stream queries. In addition, if TransactionOptions.read_only.return_read_timestamp is set to true, a special value of 2^63 - 2 will be returned in the Transaction message that describes the transaction, instead of a valid read timestamp. This special value should be discarded and not used for any subsequent queries. Please see https://cloud.google.com/spanner/docs/change-streams for more details on how to query the change stream TVFs. Partitioned DML transactions: Partitioned DML transactions are used to execute DML statements with a different execution strategy that provides different, and often better, scalability properties for large, table-wide operations than DML in a ReadWrite transaction. Smaller-scoped statements, such as those in an OLTP workload, should prefer ReadWrite transactions. Partitioned DML partitions the keyspace and runs the DML statement on each partition in separate, internal transactions. These transactions commit automatically when complete, and run independently from one another. To reduce lock contention, this execution strategy only acquires read locks on rows that match the WHERE clause of the statement. Additionally, the smaller per-partition transactions hold locks for less time. That said, Partitioned DML is not a drop-in replacement for standard DML used in ReadWrite transactions. - The DML statement must be fully-partitionable. Specifically, the statement must be expressible as the union of many statements which each access only a single row of the table. - The statement is not applied atomically to all rows of the table. Rather, the statement is applied atomically to partitions of the table, in independent transactions. Secondary index rows are updated atomically with the base table rows. - Partitioned DML does not guarantee exactly-once execution semantics against a partition. The statement is applied at least once to each partition. It is strongly recommended that the DML statement be idempotent to avoid unexpected results. 
For instance, it is potentially dangerous to run a statement such as `UPDATE table SET column = column + 1` as it could be run multiple times against some rows. - The partitions are committed automatically - there is no support for Commit or Rollback. If the call returns an error, or if the client issuing the ExecuteSql call dies, it is possible that some rows had the statement executed on them successfully. It is also possible that the statement was never executed against other rows. - Partitioned DML transactions may only contain the execution of a single DML statement via ExecuteSql or ExecuteStreamingSql. - If any error is encountered during the execution of the partitioned DML operation (for instance, a UNIQUE INDEX violation, division by zero, or a value that cannot be stored due to schema constraints), then the operation is stopped at that point and an error is returned. It is possible that at this point, some partitions have been committed (or even committed multiple times), and other partitions have not been run at all. Given the above, Partitioned DML is a good fit for large, database-wide operations that are idempotent, such as deleting old rows from a very large table. # Execute the read or SQL query in a temporary transaction. This is the most efficient way to execute a transaction that consists of a single SQL query. "excludeTxnFromChangeStreams": True or False, # When `exclude_txn_from_change_streams` is set to `true`: * Modifications from this transaction will not be recorded in change streams with DDL option `allow_txn_exclusion=true` that are tracking columns modified by these transactions. * Modifications from this transaction will be recorded in change streams with DDL option `allow_txn_exclusion=false or not set` that are tracking columns modified by these transactions. When `exclude_txn_from_change_streams` is set to `false` or not set, modifications from this transaction will be recorded in all change streams that are tracking columns modified by these transactions. `exclude_txn_from_change_streams` may only be specified for read-write or partitioned-dml transactions, otherwise the API will return an `INVALID_ARGUMENT` error. "partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction. Authorization to begin a Partitioned DML transaction requires `spanner.databases.beginPartitionedDmlTransaction` permission on the `session` resource. }, @@ -1627,7 +1627,7 @@
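The `singleUse` option pairs naturally with the timestamp bounds described in this hunk. Below is a sketch under the same placeholder assumptions as the earlier example, showing how the three bounds are spelled in the request body (duration fields use the JSON string form, for example "10s"):

from googleapiclient.discovery import build

spanner = build("spanner", "v1")
session = ("projects/my-project/instances/my-instance"
           "/databases/my-db/sessions/my-session")

# Strong (the default): sees every transaction committed before the read starts.
strong = {"singleUse": {"readOnly": {"strong": True}}}

# Exact staleness: read at a timestamp exactly 10 seconds in the past.
exact_stale = {"singleUse": {"readOnly": {"exactStaleness": "10s"}}}

# Bounded staleness: Cloud Spanner picks the freshest timestamp that is at most
# 15 seconds stale and can be served at a nearby replica without blocking.
bounded_stale = {"singleUse": {"readOnly": {"maxStaleness": "15s"}}}

body = {
    "sql": "SELECT 1",      # placeholder query
    "transaction": strong,  # or exact_stale / bounded_stale
}
result = (spanner.projects().instances().databases().sessions()
          .executeSql(session=session, body=body).execute())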

Method Details

"resumeToken": "A String", # If this request is resuming a previously interrupted read, `resume_token` should be copied from the last PartialResultSet yielded before the interruption. Doing this enables the new read to resume where the last read left off. The rest of the request parameters must exactly match the request that yielded this token. "table": "A String", # Required. The name of the table in the database to be read. "transaction": { # This message is used to select the transaction in which a Read or ExecuteSql call runs. See TransactionOptions for more information about transactions. # The transaction to use. If none is provided, the default is a temporary read-only transaction with strong concurrency. - "begin": { # Transactions: Each session can have at most one active transaction at a time (note that standalone reads and queries use a transaction internally and do count towards the one transaction limit). After the active transaction is completed, the session can immediately be re-used for the next transaction. It is not necessary to create a new session for each transaction. Transaction modes: Cloud Spanner supports three transaction modes: 1. Locking read-write. This type of transaction is the only way to write data into Cloud Spanner. These transactions rely on pessimistic locking and, if necessary, two-phase commit. Locking read-write transactions may abort, requiring the application to retry. 2. Snapshot read-only. Snapshot read-only transactions provide guaranteed consistency across several reads, but do not allow writes. Snapshot read-only transactions can be configured to read at timestamps in the past, or configured to perform a strong read (where Spanner will select a timestamp such that the read is guaranteed to see the effects of all transactions that have committed before the start of the read). Snapshot read-only transactions do not need to be committed. Queries on change streams must be performed with the snapshot read-only transaction mode, specifying a strong read. Please see TransactionOptions.ReadOnly.strong for more details. 3. Partitioned DML. This type of transaction is used to execute a single Partitioned DML statement. Partitioned DML partitions the key space and runs the DML statement over each partition in parallel using separate, internal transactions that commit independently. Partitioned DML transactions do not need to be committed. For transactions that only read, snapshot read-only transactions provide simpler semantics and are almost always faster. In particular, read-only transactions do not take locks, so they do not conflict with read-write transactions. As a consequence of not taking locks, they also do not abort, so retry loops are not needed. Transactions may only read-write data in a single database. They may, however, read-write data in different tables within that database. Locking read-write transactions: Locking transactions may be used to atomically read-modify-write data anywhere in a database. This type of transaction is externally consistent. Clients should attempt to minimize the amount of time a transaction is active. Faster transactions commit with higher probability and cause less contention. Cloud Spanner attempts to keep read locks active as long as the transaction continues to do reads, and the transaction has not been terminated by Commit or Rollback. Long periods of inactivity at the client may cause Cloud Spanner to release a transaction's locks and abort it. 
Conceptually, a read-write transaction consists of zero or more reads or SQL statements followed by Commit. At any time before Commit, the client can send a Rollback request to abort the transaction. Semantics: Cloud Spanner can commit the transaction if all read locks it acquired are still valid at commit time, and it is able to acquire write locks for all writes. Cloud Spanner can abort the transaction for any reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees that the transaction has not modified any user data in Cloud Spanner. Unless the transaction commits, Cloud Spanner makes no guarantees about how long the transaction's locks were held for. It is an error to use Cloud Spanner locks for any sort of mutual exclusion other than between Cloud Spanner transactions themselves. Retrying aborted transactions: When a transaction aborts, the application can choose to retry the whole transaction again. To maximize the chances of successfully committing the retry, the client should execute the retry in the same session as the original attempt. The original session's lock priority increases with each consecutive abort, meaning that each attempt has a slightly better chance of success than the previous. Note that the lock priority is preserved per session (not per transaction). Lock priority is set by the first read or write in the first attempt of a read-write transaction. If the application starts a new session to retry the whole transaction, the transaction loses its original lock priority. Moreover, the lock priority is only preserved if the transaction fails with an `ABORTED` error. Under some circumstances (for example, many transactions attempting to modify the same row(s)), a transaction can abort many times in a short period before successfully committing. Thus, it is not a good idea to cap the number of retries a transaction can attempt; instead, it is better to limit the total amount of time spent retrying. Idle transactions: A transaction is considered idle if it has no outstanding reads or SQL queries and has not started a read or SQL query within the last 10 seconds. Idle transactions can be aborted by Cloud Spanner so that they don't hold on to locks indefinitely. If an idle transaction is aborted, the commit will fail with error `ABORTED`. If this behavior is undesirable, periodically executing a simple SQL query in the transaction (for example, `SELECT 1`) prevents the transaction from becoming idle. Snapshot read-only transactions: Snapshot read-only transactions provide a simpler method than locking read-write transactions for doing several consistent reads. However, this type of transaction does not support writes. Snapshot transactions do not take locks. Instead, they work by choosing a Cloud Spanner timestamp, then executing all reads at that timestamp. Since they do not acquire locks, they do not block concurrent read-write transactions. Unlike locking read-write transactions, snapshot read-only transactions never abort. They can fail if the chosen read timestamp is garbage collected; however, the default garbage collection policy is generous enough that most applications do not need to worry about this in practice. Snapshot read-only transactions do not need to call Commit or Rollback (and in fact are not permitted to do so). To execute a snapshot transaction, the client specifies a timestamp bound, which tells Cloud Spanner how to choose a read timestamp. The types of timestamp bound are: - Strong (the default). - Bounded staleness.
- Exact staleness. If the Cloud Spanner database to be read is geographically distributed, stale read-only transactions can execute more quickly than strong or read-write transactions, because they are able to execute far from the leader replica. Each type of timestamp bound is discussed in detail below. Strong: Strong reads are guaranteed to see the effects of all transactions that have committed before the start of the read. Furthermore, all rows yielded by a single read are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. Strong reads are not repeatable: two consecutive strong read-only transactions might return inconsistent results if there are concurrent writes. If consistency across reads is required, the reads should be executed within a transaction or at an exact read timestamp. Queries on change streams (see below for more details) must also specify the strong read timestamp bound. See TransactionOptions.ReadOnly.strong. Exact staleness: These timestamp bounds execute reads at a user-specified timestamp. Reads at a timestamp are guaranteed to see a consistent prefix of the global transaction history: they observe modifications done by all transactions with a commit timestamp less than or equal to the read timestamp, and observe none of the modifications done by transactions with a larger commit timestamp. They will block until all conflicting transactions that may be assigned commit timestamps <= the read timestamp have finished. The timestamp can either be expressed as an absolute Cloud Spanner commit timestamp or a staleness relative to the current time. These modes do not require a "negotiation phase" to pick a timestamp. As a result, they execute slightly faster than the equivalent boundedly stale concurrency modes. On the other hand, boundedly stale reads usually return fresher results. See TransactionOptions.ReadOnly.read_timestamp and TransactionOptions.ReadOnly.exact_staleness. Bounded staleness: Bounded staleness modes allow Cloud Spanner to pick the read timestamp, subject to a user-provided staleness bound. Cloud Spanner chooses the newest timestamp within the staleness bound that allows execution of the reads at the closest available replica without blocking. All rows yielded are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. Boundedly stale reads are not repeatable: two stale reads, even if they use the same staleness bound, can execute at different timestamps and thus return inconsistent results. Boundedly stale reads execute in two phases: the first phase negotiates a timestamp among all replicas needed to serve the read. In the second phase, reads are executed at the negotiated timestamp. As a result of the two-phase execution, bounded staleness reads are usually a little slower than comparable exact staleness reads. However, they are typically able to return fresher results, and are more likely to execute at the closest replica. Because the timestamp negotiation requires up-front knowledge of which rows will be read, it can only be used with single-use read-only transactions. See TransactionOptions.ReadOnly.max_staleness and TransactionOptions.ReadOnly.min_read_timestamp. Old read timestamps and garbage collection: Cloud Spanner continuously garbage collects deleted and overwritten data in the background to reclaim storage space. This process is known as "version GC".
By default, version GC reclaims versions after they are one hour old. Because of this, Cloud Spanner cannot perform reads at read timestamps more than one hour in the past. This restriction also applies to in-progress reads and/or SQL queries whose timestamp becomes too old while executing. Reads and SQL queries with too-old read timestamps fail with the error `FAILED_PRECONDITION`. You can configure and extend the `VERSION_RETENTION_PERIOD` of a database up to a period as long as one week, which allows Cloud Spanner to perform reads up to one week in the past. Querying change streams: A change stream is a schema object that can be configured to watch data changes on the entire database, a set of tables, or a set of columns in a database. When a change stream is created, Spanner automatically defines a corresponding SQL Table-Valued Function (TVF) that can be used to query the change records in the associated change stream using the ExecuteStreamingSql API. The name of the TVF for a change stream is generated from the name of the change stream: READ_. All queries on change stream TVFs must be executed using the ExecuteStreamingSql API with a single-use read-only transaction with a strong read-only timestamp_bound. The change stream TVF allows users to specify the start_timestamp and end_timestamp for the time range of interest. All change records within the retention period are accessible using the strong read-only timestamp_bound. All other TransactionOptions are invalid for change stream queries. In addition, if TransactionOptions.read_only.return_read_timestamp is set to true, a special value of 2^63 - 2 will be returned in the Transaction message that describes the transaction, instead of a valid read timestamp. This special value should be discarded and not used for any subsequent queries. Please see https://cloud.google.com/spanner/docs/change-streams for more details on how to query the change stream TVFs. Partitioned DML transactions: Partitioned DML transactions are used to execute DML statements with a different execution strategy that provides different, and often better, scalability properties for large, table-wide operations than DML in a ReadWrite transaction. Smaller-scoped statements, such as an OLTP workload, should prefer using ReadWrite transactions. Partitioned DML partitions the keyspace and runs the DML statement on each partition in separate, internal transactions. These transactions commit automatically when complete, and run independently from one another. To reduce lock contention, this execution strategy only acquires read locks on rows that match the WHERE clause of the statement. Additionally, the smaller per-partition transactions hold locks for less time. That said, Partitioned DML is not a drop-in replacement for standard DML used in ReadWrite transactions. - The DML statement must be fully-partitionable. Specifically, the statement must be expressible as the union of many statements which each access only a single row of the table. - The statement is not applied atomically to all rows of the table. Rather, the statement is applied atomically to partitions of the table, in independent transactions. Secondary index rows are updated atomically with the base table rows. - Partitioned DML does not guarantee exactly-once execution semantics against a partition. The statement is applied at least once to each partition. It is strongly recommended that the DML statement be idempotent to avoid unexpected results.
For instance, it is potentially dangerous to run a statement such as `UPDATE table SET column = column + 1` as it could be run multiple times against some rows. - The partitions are committed automatically - there is no support for Commit or Rollback. If the call returns an error, or if the client issuing the ExecuteSql call dies, it is possible that some rows had the statement executed on them successfully. It is also possible that the statement was never executed against other rows. - Partitioned DML transactions may only contain the execution of a single DML statement via ExecuteSql or ExecuteStreamingSql. - If any error is encountered during the execution of the partitioned DML operation (for instance, a UNIQUE INDEX violation, division by zero, or a value that cannot be stored due to schema constraints), then the operation is stopped at that point and an error is returned. It is possible that at this point, some partitions have been committed (or even committed multiple times), and other partitions have not been run at all. Given the above, Partitioned DML is a good fit for large, database-wide operations that are idempotent, such as deleting old rows from a very large table. # Begin a new transaction and execute this read or SQL query in it. The transaction ID of the new transaction is returned in ResultSetMetadata.transaction, which is a Transaction. + "begin": { # Transactions: Each session can have at most one active transaction at a time (note that standalone reads and queries use a transaction internally and do count towards the one transaction limit). After the active transaction is completed, the session can immediately be re-used for the next transaction. It is not necessary to create a new session for each transaction. Transaction modes: Cloud Spanner supports three transaction modes: 1. Locking read-write. This type of transaction is the only way to write data into Cloud Spanner. These transactions rely on pessimistic locking and, if necessary, two-phase commit. Locking read-write transactions may abort, requiring the application to retry. 2. Snapshot read-only. Snapshot read-only transactions provide guaranteed consistency across several reads, but do not allow writes. Snapshot read-only transactions can be configured to read at timestamps in the past, or configured to perform a strong read (where Spanner will select a timestamp such that the read is guaranteed to see the effects of all transactions that have committed before the start of the read). Snapshot read-only transactions do not need to be committed. Queries on change streams must be performed with the snapshot read-only transaction mode, specifying a strong read. See TransactionOptions.ReadOnly.strong for more details. 3. Partitioned DML. This type of transaction is used to execute a single Partitioned DML statement. Partitioned DML partitions the key space and runs the DML statement over each partition in parallel using separate, internal transactions that commit independently. Partitioned DML transactions do not need to be committed. For transactions that only read, snapshot read-only transactions provide simpler semantics and are almost always faster. In particular, read-only transactions do not take locks, so they do not conflict with read-write transactions. As a consequence of not taking locks, they also do not abort, so retry loops are not needed. Transactions may only read-write data in a single database. They may, however, read-write data in different tables within that database.
Locking read-write transactions: Locking transactions may be used to atomically read-modify-write data anywhere in a database. This type of transaction is externally consistent. Clients should attempt to minimize the amount of time a transaction is active. Faster transactions commit with higher probability and cause less contention. Cloud Spanner attempts to keep read locks active as long as the transaction continues to do reads, and the transaction has not been terminated by Commit or Rollback. Long periods of inactivity at the client may cause Cloud Spanner to release a transaction's locks and abort it. Conceptually, a read-write transaction consists of zero or more reads or SQL statements followed by Commit. At any time before Commit, the client can send a Rollback request to abort the transaction. Semantics: Cloud Spanner can commit the transaction if all read locks it acquired are still valid at commit time, and it is able to acquire write locks for all writes. Cloud Spanner can abort the transaction for any reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees that the transaction has not modified any user data in Cloud Spanner. Unless the transaction commits, Cloud Spanner makes no guarantees about how long the transaction's locks were held for. It is an error to use Cloud Spanner locks for any sort of mutual exclusion other than between Cloud Spanner transactions themselves. Retrying aborted transactions: When a transaction aborts, the application can choose to retry the whole transaction again. To maximize the chances of successfully committing the retry, the client should execute the retry in the same session as the original attempt. The original session's lock priority increases with each consecutive abort, meaning that each attempt has a slightly better chance of success than the previous. Note that the lock priority is preserved per session (not per transaction). Lock priority is set by the first read or write in the first attempt of a read-write transaction. If the application starts a new session to retry the whole transaction, the transaction loses its original lock priority. Moreover, the lock priority is only preserved if the transaction fails with an `ABORTED` error. Under some circumstances (for example, many transactions attempting to modify the same row(s)), a transaction can abort many times in a short period before successfully committing. Thus, it is not a good idea to cap the number of retries a transaction can attempt; instead, it is better to limit the total amount of time spent retrying. Idle transactions: A transaction is considered idle if it has no outstanding reads or SQL queries and has not started a read or SQL query within the last 10 seconds. Idle transactions can be aborted by Cloud Spanner so that they don't hold on to locks indefinitely. If an idle transaction is aborted, the commit will fail with error `ABORTED`. If this behavior is undesirable, periodically executing a simple SQL query in the transaction (for example, `SELECT 1`) prevents the transaction from becoming idle. Snapshot read-only transactions: Snapshot read-only transactions provide a simpler method than locking read-write transactions for doing several consistent reads. However, this type of transaction does not support writes. Snapshot transactions do not take locks. Instead, they work by choosing a Cloud Spanner timestamp, then executing all reads at that timestamp. Since they do not acquire locks, they do not block concurrent read-write transactions.
Unlike locking read-write transactions, snapshot read-only transactions never abort. They can fail if the chosen read timestamp is garbage collected; however, the default garbage collection policy is generous enough that most applications do not need to worry about this in practice. Snapshot read-only transactions do not need to call Commit or Rollback (and in fact are not permitted to do so). To execute a snapshot transaction, the client specifies a timestamp bound, which tells Cloud Spanner how to choose a read timestamp. The types of timestamp bound are: - Strong (the default). - Bounded staleness. - Exact staleness. If the Cloud Spanner database to be read is geographically distributed, stale read-only transactions can execute more quickly than strong or read-write transactions, because they are able to execute far from the leader replica. Each type of timestamp bound is discussed in detail below. Strong: Strong reads are guaranteed to see the effects of all transactions that have committed before the start of the read. Furthermore, all rows yielded by a single read are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. Strong reads are not repeatable: two consecutive strong read-only transactions might return inconsistent results if there are concurrent writes. If consistency across reads is required, the reads should be executed within a transaction or at an exact read timestamp. Queries on change streams (see below for more details) must also specify the strong read timestamp bound. See TransactionOptions.ReadOnly.strong. Exact staleness: These timestamp bounds execute reads at a user-specified timestamp. Reads at a timestamp are guaranteed to see a consistent prefix of the global transaction history: they observe modifications done by all transactions with a commit timestamp less than or equal to the read timestamp, and observe none of the modifications done by transactions with a larger commit timestamp. They will block until all conflicting transactions that may be assigned commit timestamps <= the read timestamp have finished. The timestamp can either be expressed as an absolute Cloud Spanner commit timestamp or a staleness relative to the current time. These modes do not require a "negotiation phase" to pick a timestamp. As a result, they execute slightly faster than the equivalent boundedly stale concurrency modes. On the other hand, boundedly stale reads usually return fresher results. See TransactionOptions.ReadOnly.read_timestamp and TransactionOptions.ReadOnly.exact_staleness. Bounded staleness: Bounded staleness modes allow Cloud Spanner to pick the read timestamp, subject to a user-provided staleness bound. Cloud Spanner chooses the newest timestamp within the staleness bound that allows execution of the reads at the closest available replica without blocking. All rows yielded are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. Boundedly stale reads are not repeatable: two stale reads, even if they use the same staleness bound, can execute at different timestamps and thus return inconsistent results. Boundedly stale reads execute in two phases: the first phase negotiates a timestamp among all replicas needed to serve the read. In the second phase, reads are executed at the negotiated timestamp. As a result of the two-phase execution, bounded staleness reads are usually a little slower than comparable exact staleness reads.
However, they are typically able to return fresher results, and are more likely to execute at the closest replica. Because the timestamp negotiation requires up-front knowledge of which rows will be read, it can only be used with single-use read-only transactions. See TransactionOptions.ReadOnly.max_staleness and TransactionOptions.ReadOnly.min_read_timestamp. Old read timestamps and garbage collection: Cloud Spanner continuously garbage collects deleted and overwritten data in the background to reclaim storage space. This process is known as "version GC". By default, version GC reclaims versions after they are one hour old. Because of this, Cloud Spanner cannot perform reads at read timestamps more than one hour in the past. This restriction also applies to in-progress reads and/or SQL queries whose timestamp becomes too old while executing. Reads and SQL queries with too-old read timestamps fail with the error `FAILED_PRECONDITION`. You can configure and extend the `VERSION_RETENTION_PERIOD` of a database up to a period as long as one week, which allows Cloud Spanner to perform reads up to one week in the past. Querying change streams: A change stream is a schema object that can be configured to watch data changes on the entire database, a set of tables, or a set of columns in a database. When a change stream is created, Spanner automatically defines a corresponding SQL Table-Valued Function (TVF) that can be used to query the change records in the associated change stream using the ExecuteStreamingSql API. The name of the TVF for a change stream is generated from the name of the change stream: READ_. All queries on change stream TVFs must be executed using the ExecuteStreamingSql API with a single-use read-only transaction with a strong read-only timestamp_bound. The change stream TVF allows users to specify the start_timestamp and end_timestamp for the time range of interest. All change records within the retention period are accessible using the strong read-only timestamp_bound. All other TransactionOptions are invalid for change stream queries. In addition, if TransactionOptions.read_only.return_read_timestamp is set to true, a special value of 2^63 - 2 will be returned in the Transaction message that describes the transaction, instead of a valid read timestamp. This special value should be discarded and not used for any subsequent queries. Please see https://cloud.google.com/spanner/docs/change-streams for more details on how to query the change stream TVFs. Partitioned DML transactions: Partitioned DML transactions are used to execute DML statements with a different execution strategy that provides different, and often better, scalability properties for large, table-wide operations than DML in a ReadWrite transaction. Smaller-scoped statements, such as an OLTP workload, should prefer using ReadWrite transactions. Partitioned DML partitions the keyspace and runs the DML statement on each partition in separate, internal transactions. These transactions commit automatically when complete, and run independently from one another. To reduce lock contention, this execution strategy only acquires read locks on rows that match the WHERE clause of the statement. Additionally, the smaller per-partition transactions hold locks for less time. That said, Partitioned DML is not a drop-in replacement for standard DML used in ReadWrite transactions. - The DML statement must be fully-partitionable.
Specifically, the statement must be expressible as the union of many statements which each access only a single row of the table. - The statement is not applied atomically to all rows of the table. Rather, the statement is applied atomically to partitions of the table, in independent transactions. Secondary index rows are updated atomically with the base table rows. - Partitioned DML does not guarantee exactly-once execution semantics against a partition. The statement is applied at least once to each partition. It is strongly recommended that the DML statement be idempotent to avoid unexpected results. For instance, it is potentially dangerous to run a statement such as `UPDATE table SET column = column + 1` as it could be run multiple times against some rows. - The partitions are committed automatically - there is no support for Commit or Rollback. If the call returns an error, or if the client issuing the ExecuteSql call dies, it is possible that some rows had the statement executed on them successfully. It is also possible that the statement was never executed against other rows. - Partitioned DML transactions may only contain the execution of a single DML statement via ExecuteSql or ExecuteStreamingSql. - If any error is encountered during the execution of the partitioned DML operation (for instance, a UNIQUE INDEX violation, division by zero, or a value that cannot be stored due to schema constraints), then the operation is stopped at that point and an error is returned. It is possible that at this point, some partitions have been committed (or even committed multiple times), and other partitions have not been run at all. Given the above, Partitioned DML is a good fit for large, database-wide operations that are idempotent, such as deleting old rows from a very large table. # Begin a new transaction and execute this read or SQL query in it. The transaction ID of the new transaction is returned in ResultSetMetadata.transaction, which is a Transaction. "excludeTxnFromChangeStreams": True or False, # When `exclude_txn_from_change_streams` is set to `true`: * Modifications from this transaction will not be recorded in change streams with DDL option `allow_txn_exclusion=true` that are tracking columns modified by these transactions. * Modifications from this transaction will be recorded in change streams with DDL option `allow_txn_exclusion=false or not set` that are tracking columns modified by these transactions. When `exclude_txn_from_change_streams` is set to `false` or not set, modifications from this transaction will be recorded in all change streams that are tracking columns modified by these transactions. `exclude_txn_from_change_streams` may only be specified for read-write or partitioned-dml transactions, otherwise the API will return an `INVALID_ARGUMENT` error. "partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction. Authorization to begin a Partitioned DML transaction requires `spanner.databases.beginPartitionedDmlTransaction` permission on the `session` resource. }, @@ -1644,7 +1644,7 @@
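Because the `begin` option above both starts the transaction and executes the first statement, a follow-up Commit has to reuse the transaction ID returned in `ResultSetMetadata.transaction`. A minimal sketch under stated assumptions: the session resource name, table, and statement are hypothetical, and the `ABORTED`-retry loop the docs recommend is omitted for brevity.

```python
# Sketch: inline "begin" of a read-write transaction on the first ExecuteSql
# call; the new transaction ID comes back in ResultSetMetadata.transaction.
from googleapiclient import discovery

spanner = discovery.build("spanner", "v1")
sessions = spanner.projects().instances().databases().sessions()
session_name = ("projects/my-project/instances/my-instance"
                "/databases/my-db/sessions/my-session")  # hypothetical

result = sessions.executeSql(
    session=session_name,
    body={
        "transaction": {"begin": {"readWrite": {}}},
        "sql": "UPDATE accounts SET active = FALSE WHERE id = 1",
        # DML statements in read-write transactions carry a per-transaction
        # sequence number (serialized as a string) for safe replay.
        "seqno": "1",
    },
).execute()

txn_id = result["metadata"]["transaction"]["id"]

# Later statements would select the same transaction via {"id": txn_id};
# the commit then references the transaction ID explicitly.
sessions.commit(
    session=session_name,
    body={"transactionId": txn_id},
).execute()
```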

Method Details

}, }, "id": "A String", # Execute the read or SQL query in a previously-started transaction. - "singleUse": { # Transactions: Each session can have at most one active transaction at a time (note that standalone reads and queries use a transaction internally and do count towards the one transaction limit). After the active transaction is completed, the session can immediately be re-used for the next transaction. It is not necessary to create a new session for each transaction. Transaction modes: Cloud Spanner supports three transaction modes: 1. Locking read-write. This type of transaction is the only way to write data into Cloud Spanner. These transactions rely on pessimistic locking and, if necessary, two-phase commit. Locking read-write transactions may abort, requiring the application to retry. 2. Snapshot read-only. Snapshot read-only transactions provide guaranteed consistency across several reads, but do not allow writes. Snapshot read-only transactions can be configured to read at timestamps in the past, or configured to perform a strong read (where Spanner will select a timestamp such that the read is guaranteed to see the effects of all transactions that have committed before the start of the read). Snapshot read-only transactions do not need to be committed. Queries on change streams must be performed with the snapshot read-only transaction mode, specifying a strong read. Please see TransactionOptions.ReadOnly.strong for more details. 3. Partitioned DML. This type of transaction is used to execute a single Partitioned DML statement. Partitioned DML partitions the key space and runs the DML statement over each partition in parallel using separate, internal transactions that commit independently. Partitioned DML transactions do not need to be committed. For transactions that only read, snapshot read-only transactions provide simpler semantics and are almost always faster. In particular, read-only transactions do not take locks, so they do not conflict with read-write transactions. As a consequence of not taking locks, they also do not abort, so retry loops are not needed. Transactions may only read-write data in a single database. They may, however, read-write data in different tables within that database. Locking read-write transactions: Locking transactions may be used to atomically read-modify-write data anywhere in a database. This type of transaction is externally consistent. Clients should attempt to minimize the amount of time a transaction is active. Faster transactions commit with higher probability and cause less contention. Cloud Spanner attempts to keep read locks active as long as the transaction continues to do reads, and the transaction has not been terminated by Commit or Rollback. Long periods of inactivity at the client may cause Cloud Spanner to release a transaction's locks and abort it. Conceptually, a read-write transaction consists of zero or more reads or SQL statements followed by Commit. At any time before Commit, the client can send a Rollback request to abort the transaction. Semantics: Cloud Spanner can commit the transaction if all read locks it acquired are still valid at commit time, and it is able to acquire write locks for all writes. Cloud Spanner can abort the transaction for any reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees that the transaction has not modified any user data in Cloud Spanner. Unless the transaction commits, Cloud Spanner makes no guarantees about how long the transaction's locks were held for. 
It is an error to use Cloud Spanner locks for any sort of mutual exclusion other than between Cloud Spanner transactions themselves. Retrying aborted transactions: When a transaction aborts, the application can choose to retry the whole transaction again. To maximize the chances of successfully committing the retry, the client should execute the retry in the same session as the original attempt. The original session's lock priority increases with each consecutive abort, meaning that each attempt has a slightly better chance of success than the previous. Note that the lock priority is preserved per session (not per transaction). Lock priority is set by the first read or write in the first attempt of a read-write transaction. If the application starts a new session to retry the whole transaction, the transaction loses its original lock priority. Moreover, the lock priority is only preserved if the transaction fails with an `ABORTED` error. Under some circumstances (for example, many transactions attempting to modify the same row(s)), a transaction can abort many times in a short period before successfully committing. Thus, it is not a good idea to cap the number of retries a transaction can attempt; instead, it is better to limit the total amount of time spent retrying. Idle transactions: A transaction is considered idle if it has no outstanding reads or SQL queries and has not started a read or SQL query within the last 10 seconds. Idle transactions can be aborted by Cloud Spanner so that they don't hold on to locks indefinitely. If an idle transaction is aborted, the commit will fail with error `ABORTED`. If this behavior is undesirable, periodically executing a simple SQL query in the transaction (for example, `SELECT 1`) prevents the transaction from becoming idle. Snapshot read-only transactions: Snapshot read-only transactions provide a simpler method than locking read-write transactions for doing several consistent reads. However, this type of transaction does not support writes. Snapshot transactions do not take locks. Instead, they work by choosing a Cloud Spanner timestamp, then executing all reads at that timestamp. Since they do not acquire locks, they do not block concurrent read-write transactions. Unlike locking read-write transactions, snapshot read-only transactions never abort. They can fail if the chosen read timestamp is garbage collected; however, the default garbage collection policy is generous enough that most applications do not need to worry about this in practice. Snapshot read-only transactions do not need to call Commit or Rollback (and in fact are not permitted to do so). To execute a snapshot transaction, the client specifies a timestamp bound, which tells Cloud Spanner how to choose a read timestamp. The types of timestamp bound are: - Strong (the default). - Bounded staleness. - Exact staleness. If the Cloud Spanner database to be read is geographically distributed, stale read-only transactions can execute more quickly than strong or read-write transactions, because they are able to execute far from the leader replica. Each type of timestamp bound is discussed in detail below. Strong: Strong reads are guaranteed to see the effects of all transactions that have committed before the start of the read. Furthermore, all rows yielded by a single read are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction.
Strong reads are not repeatable: two consecutive strong read-only transactions might return inconsistent results if there are concurrent writes. If consistency across reads is required, the reads should be executed within a transaction or at an exact read timestamp. Queries on change streams (see below for more details) must also specify the strong read timestamp bound. See TransactionOptions.ReadOnly.strong. Exact staleness: These timestamp bounds execute reads at a user-specified timestamp. Reads at a timestamp are guaranteed to see a consistent prefix of the global transaction history: they observe modifications done by all transactions with a commit timestamp less than or equal to the read timestamp, and observe none of the modifications done by transactions with a larger commit timestamp. They will block until all conflicting transactions that may be assigned commit timestamps <= the read timestamp have finished. The timestamp can either be expressed as an absolute Cloud Spanner commit timestamp or a staleness relative to the current time. These modes do not require a "negotiation phase" to pick a timestamp. As a result, they execute slightly faster than the equivalent boundedly stale concurrency modes. On the other hand, boundedly stale reads usually return fresher results. See TransactionOptions.ReadOnly.read_timestamp and TransactionOptions.ReadOnly.exact_staleness. Bounded staleness: Bounded staleness modes allow Cloud Spanner to pick the read timestamp, subject to a user-provided staleness bound. Cloud Spanner chooses the newest timestamp within the staleness bound that allows execution of the reads at the closest available replica without blocking. All rows yielded are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. Boundedly stale reads are not repeatable: two stale reads, even if they use the same staleness bound, can execute at different timestamps and thus return inconsistent results. Boundedly stale reads execute in two phases: the first phase negotiates a timestamp among all replicas needed to serve the read. In the second phase, reads are executed at the negotiated timestamp. As a result of the two-phase execution, bounded staleness reads are usually a little slower than comparable exact staleness reads. However, they are typically able to return fresher results, and are more likely to execute at the closest replica. Because the timestamp negotiation requires up-front knowledge of which rows will be read, it can only be used with single-use read-only transactions. See TransactionOptions.ReadOnly.max_staleness and TransactionOptions.ReadOnly.min_read_timestamp. Old read timestamps and garbage collection: Cloud Spanner continuously garbage collects deleted and overwritten data in the background to reclaim storage space. This process is known as "version GC". By default, version GC reclaims versions after they are one hour old. Because of this, Cloud Spanner cannot perform reads at read timestamps more than one hour in the past. This restriction also applies to in-progress reads and/or SQL queries whose timestamp becomes too old while executing. Reads and SQL queries with too-old read timestamps fail with the error `FAILED_PRECONDITION`. You can configure and extend the `VERSION_RETENTION_PERIOD` of a database up to a period as long as one week, which allows Cloud Spanner to perform reads up to one week in the past.
Querying change streams: A change stream is a schema object that can be configured to watch data changes on the entire database, a set of tables, or a set of columns in a database. When a change stream is created, Spanner automatically defines a corresponding SQL Table-Valued Function (TVF) that can be used to query the change records in the associated change stream using the ExecuteStreamingSql API. The name of the TVF for a change stream is generated from the name of the change stream: READ_. All queries on change stream TVFs must be executed using the ExecuteStreamingSql API with a single-use read-only transaction with a strong read-only timestamp_bound. The change stream TVF allows users to specify the start_timestamp and end_timestamp for the time range of interest. All change records within the retention period are accessible using the strong read-only timestamp_bound. All other TransactionOptions are invalid for change stream queries. In addition, if TransactionOptions.read_only.return_read_timestamp is set to true, a special value of 2^63 - 2 will be returned in the Transaction message that describes the transaction, instead of a valid read timestamp. This special value should be discarded and not used for any subsequent queries. Please see https://cloud.google.com/spanner/docs/change-streams for more details on how to query the change stream TVFs. Partitioned DML transactions: Partitioned DML transactions are used to execute DML statements with a different execution strategy that provides different, and often better, scalability properties for large, table-wide operations than DML in a ReadWrite transaction. Smaller-scoped statements, such as an OLTP workload, should prefer using ReadWrite transactions. Partitioned DML partitions the keyspace and runs the DML statement on each partition in separate, internal transactions. These transactions commit automatically when complete, and run independently from one another. To reduce lock contention, this execution strategy only acquires read locks on rows that match the WHERE clause of the statement. Additionally, the smaller per-partition transactions hold locks for less time. That said, Partitioned DML is not a drop-in replacement for standard DML used in ReadWrite transactions. - The DML statement must be fully-partitionable. Specifically, the statement must be expressible as the union of many statements which each access only a single row of the table. - The statement is not applied atomically to all rows of the table. Rather, the statement is applied atomically to partitions of the table, in independent transactions. Secondary index rows are updated atomically with the base table rows. - Partitioned DML does not guarantee exactly-once execution semantics against a partition. The statement is applied at least once to each partition. It is strongly recommended that the DML statement be idempotent to avoid unexpected results. For instance, it is potentially dangerous to run a statement such as `UPDATE table SET column = column + 1` as it could be run multiple times against some rows. - The partitions are committed automatically - there is no support for Commit or Rollback. If the call returns an error, or if the client issuing the ExecuteSql call dies, it is possible that some rows had the statement executed on them successfully. It is also possible that the statement was never executed against other rows. - Partitioned DML transactions may only contain the execution of a single DML statement via ExecuteSql or ExecuteStreamingSql.
- If any error is encountered during the execution of the partitioned DML operation (for instance, a UNIQUE INDEX violation, division by zero, or a value that cannot be stored due to schema constraints), then the operation is stopped at that point and an error is returned. It is possible that at this point, some partitions have been committed (or even committed multiple times), and other partitions have not been run at all. Given the above, Partitioned DML is a good fit for large, database-wide operations that are idempotent, such as deleting old rows from a very large table. # Execute the read or SQL query in a temporary transaction. This is the most efficient way to execute a transaction that consists of a single SQL query. + "singleUse": { # Transactions: Each session can have at most one active transaction at a time (note that standalone reads and queries use a transaction internally and do count towards the one transaction limit). After the active transaction is completed, the session can immediately be re-used for the next transaction. It is not necessary to create a new session for each transaction. Transaction modes: Cloud Spanner supports three transaction modes: 1. Locking read-write. This type of transaction is the only way to write data into Cloud Spanner. These transactions rely on pessimistic locking and, if necessary, two-phase commit. Locking read-write transactions may abort, requiring the application to retry. 2. Snapshot read-only. Snapshot read-only transactions provide guaranteed consistency across several reads, but do not allow writes. Snapshot read-only transactions can be configured to read at timestamps in the past, or configured to perform a strong read (where Spanner will select a timestamp such that the read is guaranteed to see the effects of all transactions that have committed before the start of the read). Snapshot read-only transactions do not need to be committed. Queries on change streams must be performed with the snapshot read-only transaction mode, specifying a strong read. See TransactionOptions.ReadOnly.strong for more details. 3. Partitioned DML. This type of transaction is used to execute a single Partitioned DML statement. Partitioned DML partitions the key space and runs the DML statement over each partition in parallel using separate, internal transactions that commit independently. Partitioned DML transactions do not need to be committed. For transactions that only read, snapshot read-only transactions provide simpler semantics and are almost always faster. In particular, read-only transactions do not take locks, so they do not conflict with read-write transactions. As a consequence of not taking locks, they also do not abort, so retry loops are not needed. Transactions may only read-write data in a single database. They may, however, read-write data in different tables within that database. Locking read-write transactions: Locking transactions may be used to atomically read-modify-write data anywhere in a database. This type of transaction is externally consistent. Clients should attempt to minimize the amount of time a transaction is active. Faster transactions commit with higher probability and cause less contention. Cloud Spanner attempts to keep read locks active as long as the transaction continues to do reads, and the transaction has not been terminated by Commit or Rollback. Long periods of inactivity at the client may cause Cloud Spanner to release a transaction's locks and abort it.
Conceptually, a read-write transaction consists of zero or more reads or SQL statements followed by Commit. At any time before Commit, the client can send a Rollback request to abort the transaction. Semantics: Cloud Spanner can commit the transaction if all read locks it acquired are still valid at commit time, and it is able to acquire write locks for all writes. Cloud Spanner can abort the transaction for any reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees that the transaction has not modified any user data in Cloud Spanner. Unless the transaction commits, Cloud Spanner makes no guarantees about how long the transaction's locks were held for. It is an error to use Cloud Spanner locks for any sort of mutual exclusion other than between Cloud Spanner transactions themselves. Retrying aborted transactions: When a transaction aborts, the application can choose to retry the whole transaction again. To maximize the chances of successfully committing the retry, the client should execute the retry in the same session as the original attempt. The original session's lock priority increases with each consecutive abort, meaning that each attempt has a slightly better chance of success than the previous. Note that the lock priority is preserved per session (not per transaction). Lock priority is set by the first read or write in the first attempt of a read-write transaction. If the application starts a new session to retry the whole transaction, the transaction loses its original lock priority. Moreover, the lock priority is only preserved if the transaction fails with an `ABORTED` error. Under some circumstances (for example, many transactions attempting to modify the same row(s)), a transaction can abort many times in a short period before successfully committing. Thus, it is not a good idea to cap the number of retries a transaction can attempt; instead, it is better to limit the total amount of time spent retrying. Idle transactions: A transaction is considered idle if it has no outstanding reads or SQL queries and has not started a read or SQL query within the last 10 seconds. Idle transactions can be aborted by Cloud Spanner so that they don't hold on to locks indefinitely. If an idle transaction is aborted, the commit will fail with error `ABORTED`. If this behavior is undesirable, periodically executing a simple SQL query in the transaction (for example, `SELECT 1`) prevents the transaction from becoming idle. Snapshot read-only transactions: Snapshot read-only transactions provide a simpler method than locking read-write transactions for doing several consistent reads. However, this type of transaction does not support writes. Snapshot transactions do not take locks. Instead, they work by choosing a Cloud Spanner timestamp, then executing all reads at that timestamp. Since they do not acquire locks, they do not block concurrent read-write transactions. Unlike locking read-write transactions, snapshot read-only transactions never abort. They can fail if the chosen read timestamp is garbage collected; however, the default garbage collection policy is generous enough that most applications do not need to worry about this in practice. Snapshot read-only transactions do not need to call Commit or Rollback (and in fact are not permitted to do so). To execute a snapshot transaction, the client specifies a timestamp bound, which tells Cloud Spanner how to choose a read timestamp. The types of timestamp bound are: - Strong (the default). - Bounded staleness.
- Exact staleness. If the Cloud Spanner database to be read is geographically distributed, stale read-only transactions can execute more quickly than strong or read-write transactions, because they are able to execute far from the leader replica. Each type of timestamp bound is discussed in detail below. Strong: Strong reads are guaranteed to see the effects of all transactions that have committed before the start of the read. Furthermore, all rows yielded by a single read are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. Strong reads are not repeatable: two consecutive strong read-only transactions might return inconsistent results if there are concurrent writes. If consistency across reads is required, the reads should be executed within a transaction or at an exact read timestamp. Queries on change streams (see below for more details) must also specify the strong read timestamp bound. See TransactionOptions.ReadOnly.strong. Exact staleness: These timestamp bounds execute reads at a user-specified timestamp. Reads at a timestamp are guaranteed to see a consistent prefix of the global transaction history: they observe modifications done by all transactions with a commit timestamp less than or equal to the read timestamp, and observe none of the modifications done by transactions with a larger commit timestamp. They will block until all conflicting transactions that may be assigned commit timestamps <= the read timestamp have finished. The timestamp can either be expressed as an absolute Cloud Spanner commit timestamp or a staleness relative to the current time. These modes do not require a "negotiation phase" to pick a timestamp. As a result, they execute slightly faster than the equivalent boundedly stale concurrency modes. On the other hand, boundedly stale reads usually return fresher results. See TransactionOptions.ReadOnly.read_timestamp and TransactionOptions.ReadOnly.exact_staleness. Bounded staleness: Bounded staleness modes allow Cloud Spanner to pick the read timestamp, subject to a user-provided staleness bound. Cloud Spanner chooses the newest timestamp within the staleness bound that allows execution of the reads at the closest available replica without blocking. All rows yielded are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. Boundedly stale reads are not repeatable: two stale reads, even if they use the same staleness bound, can execute at different timestamps and thus return inconsistent results. Boundedly stale reads execute in two phases: the first phase negotiates a timestamp among all replicas needed to serve the read. In the second phase, reads are executed at the negotiated timestamp. As a result of the two-phase execution, bounded staleness reads are usually a little slower than comparable exact staleness reads. However, they are typically able to return fresher results, and are more likely to execute at the closest replica. Because the timestamp negotiation requires up-front knowledge of which rows will be read, it can only be used with single-use read-only transactions. See TransactionOptions.ReadOnly.max_staleness and TransactionOptions.ReadOnly.min_read_timestamp. Old read timestamps and garbage collection: Cloud Spanner continuously garbage collects deleted and overwritten data in the background to reclaim storage space. This process is known as "version GC".
By default, version GC reclaims versions after they are one hour old. Because of this, Cloud Spanner cannot perform reads at read timestamps more than one hour in the past. This restriction also applies to in-progress reads and/or SQL queries whose timestamp becomes too old while executing. Reads and SQL queries with too-old read timestamps fail with the error `FAILED_PRECONDITION`. You can configure and extend the `VERSION_RETENTION_PERIOD` of a database up to a period as long as one week, which allows Cloud Spanner to perform reads up to one week in the past. Querying change streams: A change stream is a schema object that can be configured to watch data changes on the entire database, a set of tables, or a set of columns in a database. When a change stream is created, Spanner automatically defines a corresponding SQL Table-Valued Function (TVF) that can be used to query the change records in the associated change stream using the ExecuteStreamingSql API. The name of the TVF for a change stream is generated from the name of the change stream: READ_. All queries on change stream TVFs must be executed using the ExecuteStreamingSql API with a single-use read-only transaction with a strong read-only timestamp_bound. The change stream TVF allows users to specify the start_timestamp and end_timestamp for the time range of interest. All change records within the retention period are accessible using the strong read-only timestamp_bound. All other TransactionOptions are invalid for change stream queries. In addition, if TransactionOptions.read_only.return_read_timestamp is set to true, a special value of 2^63 - 2 will be returned in the Transaction message that describes the transaction, instead of a valid read timestamp. This special value should be discarded and not used for any subsequent queries. Please see https://cloud.google.com/spanner/docs/change-streams for more details on how to query the change stream TVFs. Partitioned DML transactions: Partitioned DML transactions are used to execute DML statements with a different execution strategy that provides different, and often better, scalability properties for large, table-wide operations than DML in a ReadWrite transaction. Smaller-scoped statements, such as an OLTP workload, should prefer using ReadWrite transactions. Partitioned DML partitions the keyspace and runs the DML statement on each partition in separate, internal transactions. These transactions commit automatically when complete, and run independently from one another. To reduce lock contention, this execution strategy only acquires read locks on rows that match the WHERE clause of the statement. Additionally, the smaller per-partition transactions hold locks for less time. That said, Partitioned DML is not a drop-in replacement for standard DML used in ReadWrite transactions. - The DML statement must be fully-partitionable. Specifically, the statement must be expressible as the union of many statements which each access only a single row of the table. - The statement is not applied atomically to all rows of the table. Rather, the statement is applied atomically to partitions of the table, in independent transactions. Secondary index rows are updated atomically with the base table rows. - Partitioned DML does not guarantee exactly-once execution semantics against a partition. The statement is applied at least once to each partition. It is strongly recommended that the DML statement be idempotent to avoid unexpected results.
For instance, it is potentially dangerous to run a statement such as `UPDATE table SET column = column + 1` as it could be run multiple times against some rows. - The partitions are committed automatically - there is no support for Commit or Rollback. If the call returns an error, or if the client issuing the ExecuteSql call dies, it is possible that some rows had the statement executed on them successfully. It is also possible that the statement was never executed against other rows. - Partitioned DML transactions may only contain the execution of a single DML statement via ExecuteSql or ExecuteStreamingSql. - If any error is encountered during the execution of the partitioned DML operation (for instance, a UNIQUE INDEX violation, division by zero, or a value that cannot be stored due to schema constraints), then the operation is stopped at that point and an error is returned. It is possible that at this point, some partitions have been committed (or even committed multiple times), and other partitions have not been run at all. Given the above, Partitioned DML is a good fit for large, database-wide operations that are idempotent, such as deleting old rows from a very large table. # Execute the read or SQL query in a temporary transaction. This is the most efficient way to execute a transaction that consists of a single SQL query. "excludeTxnFromChangeStreams": True or False, # When `exclude_txn_from_change_streams` is set to `true`: * Modifications from this transaction will not be recorded in change streams with DDL option `allow_txn_exclusion=true` that are tracking columns modified by these transactions. * Modifications from this transaction will be recorded in change streams with DDL option `allow_txn_exclusion=false or not set` that are tracking columns modified by these transactions. When `exclude_txn_from_change_streams` is set to `false` or not set, modifications from this transaction will be recorded in all change streams that are tracking columns modified by these transactions. `exclude_txn_from_change_streams` may only be specified for read-write or partitioned-dml transactions, otherwise the API will return an `INVALID_ARGUMENT` error. "partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction. Authorization to begin a Partitioned DML transaction requires `spanner.databases.beginPartitionedDmlTransaction` permission on the `session` resource. }, diff --git a/docs/dyn/spanner_v1.projects.instances.html b/docs/dyn/spanner_v1.projects.instances.html index f431c4cbeca..25f61acaa6a 100644 --- a/docs/dyn/spanner_v1.projects.instances.html +++ b/docs/dyn/spanner_v1.projects.instances.html @@ -187,8 +187,8 @@
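For the `singleUse` case described above, the client never calls Commit or Rollback; the timestamp bound rides along on the request itself. A minimal sketch with a bounded-staleness read follows; the session, table, and columns are hypothetical placeholders, and per the options above `maxStaleness` is only valid on single-use read-only transactions.

```python
# Sketch: single-use read-only query with a 10-second staleness bound,
# letting Spanner serve the read from the closest replica without blocking.
from googleapiclient import discovery

spanner = discovery.build("spanner", "v1")
sessions = spanner.projects().instances().databases().sessions()
session_name = ("projects/my-project/instances/my-instance"
                "/databases/my-db/sessions/my-session")  # hypothetical

result = sessions.executeSql(
    session=session_name,
    body={
        # Temporary transaction: no Commit/Rollback follows, and no
        # transaction ID is returned.
        "transaction": {"singleUse": {"readOnly": {"maxStaleness": "10s"}}},
        "sql": "SELECT id, name FROM users",  # hypothetical table
    },
).execute()

for row in result.get("rows", []):
    print(row)
```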

Method Details

"a_key": "A String", }, "name": "A String", # Required. A unique identifier for the instance, which cannot be changed after the instance is created. Values are of the form `projects//instances/a-z*[a-z0-9]`. The final segment of the name must be between 2 and 64 characters in length. - "nodeCount": 42, # The number of nodes allocated to this instance. At most one of either node_count or processing_units should be present in the message. Users can set the node_count field to specify the target number of nodes allocated to the instance. This may be zero in API responses for instances that are not yet in state `READY`. See [the documentation](https://cloud.google.com/spanner/docs/compute-capacity) for more information about nodes and processing units. - "processingUnits": 42, # The number of processing units allocated to this instance. At most one of processing_units or node_count should be present in the message. Users can set the processing_units field to specify the target number of processing units allocated to the instance. This may be zero in API responses for instances that are not yet in state `READY`. See [the documentation](https://cloud.google.com/spanner/docs/compute-capacity) for more information about nodes and processing units. + "nodeCount": 42, # The number of nodes allocated to this instance. At most one of either node_count or processing_units should be present in the message. Users can set the node_count field to specify the target number of nodes allocated to the instance. If autoscaling is enabled, node_count is treated as an OUTPUT_ONLY field and reflects the current number of nodes allocated to the instance. This may be zero in API responses for instances that are not yet in state `READY`. See [the documentation](https://cloud.google.com/spanner/docs/compute-capacity) for more information about nodes and processing units. + "processingUnits": 42, # The number of processing units allocated to this instance. At most one of processing_units or node_count should be present in the message. Users can set the processing_units field to specify the target number of processing units allocated to the instance. If autoscaling is enabled, processing_units is treated as an OUTPUT_ONLY field and reflects the current number of processing units allocated to the instance. This may be zero in API responses for instances that are not yet in state `READY`. See [the documentation](https://cloud.google.com/spanner/docs/compute-capacity) for more information about nodes and processing units. "state": "A String", # Output only. The current instance state. For CreateInstance, the state must be either omitted or set to `CREATING`. For UpdateInstance, the state must be either omitted or set to `READY`. "updateTime": "A String", # Output only. The time at which the instance was most recently updated. }, @@ -286,8 +286,8 @@

Method Details

"a_key": "A String", }, "name": "A String", # Required. A unique identifier for the instance, which cannot be changed after the instance is created. Values are of the form `projects//instances/a-z*[a-z0-9]`. The final segment of the name must be between 2 and 64 characters in length. - "nodeCount": 42, # The number of nodes allocated to this instance. At most one of either node_count or processing_units should be present in the message. Users can set the node_count field to specify the target number of nodes allocated to the instance. This may be zero in API responses for instances that are not yet in state `READY`. See [the documentation](https://cloud.google.com/spanner/docs/compute-capacity) for more information about nodes and processing units. - "processingUnits": 42, # The number of processing units allocated to this instance. At most one of processing_units or node_count should be present in the message. Users can set the processing_units field to specify the target number of processing units allocated to the instance. This may be zero in API responses for instances that are not yet in state `READY`. See [the documentation](https://cloud.google.com/spanner/docs/compute-capacity) for more information about nodes and processing units. + "nodeCount": 42, # The number of nodes allocated to this instance. At most one of either node_count or processing_units should be present in the message. Users can set the node_count field to specify the target number of nodes allocated to the instance. If autoscaling is enabled, node_count is treated as an OUTPUT_ONLY field and reflects the current number of nodes allocated to the instance. This may be zero in API responses for instances that are not yet in state `READY`. See [the documentation](https://cloud.google.com/spanner/docs/compute-capacity) for more information about nodes and processing units. + "processingUnits": 42, # The number of processing units allocated to this instance. At most one of processing_units or node_count should be present in the message. Users can set the processing_units field to specify the target number of processing units allocated to the instance. If autoscaling is enabled, processing_units is treated as an OUTPUT_ONLY field and reflects the current number of processing units allocated to the instance. This may be zero in API responses for instances that are not yet in state `READY`. See [the documentation](https://cloud.google.com/spanner/docs/compute-capacity) for more information about nodes and processing units. "state": "A String", # Output only. The current instance state. For CreateInstance, the state must be either omitted or set to `CREATING`. For UpdateInstance, the state must be either omitted or set to `READY`. "updateTime": "A String", # Output only. The time at which the instance was most recently updated. }
@@ -385,8 +385,8 @@

Method Details

"a_key": "A String", }, "name": "A String", # Required. A unique identifier for the instance, which cannot be changed after the instance is created. Values are of the form `projects//instances/a-z*[a-z0-9]`. The final segment of the name must be between 2 and 64 characters in length. - "nodeCount": 42, # The number of nodes allocated to this instance. At most one of either node_count or processing_units should be present in the message. Users can set the node_count field to specify the target number of nodes allocated to the instance. This may be zero in API responses for instances that are not yet in state `READY`. See [the documentation](https://cloud.google.com/spanner/docs/compute-capacity) for more information about nodes and processing units. - "processingUnits": 42, # The number of processing units allocated to this instance. At most one of processing_units or node_count should be present in the message. Users can set the processing_units field to specify the target number of processing units allocated to the instance. This may be zero in API responses for instances that are not yet in state `READY`. See [the documentation](https://cloud.google.com/spanner/docs/compute-capacity) for more information about nodes and processing units. + "nodeCount": 42, # The number of nodes allocated to this instance. At most one of either node_count or processing_units should be present in the message. Users can set the node_count field to specify the target number of nodes allocated to the instance. If autoscaling is enabled, node_count is treated as an OUTPUT_ONLY field and reflects the current number of nodes allocated to the instance. This may be zero in API responses for instances that are not yet in state `READY`. See [the documentation](https://cloud.google.com/spanner/docs/compute-capacity) for more information about nodes and processing units. + "processingUnits": 42, # The number of processing units allocated to this instance. At most one of processing_units or node_count should be present in the message. Users can set the processing_units field to specify the target number of processing units allocated to the instance. If autoscaling is enabled, processing_units is treated as an OUTPUT_ONLY field and reflects the current number of processing units allocated to the instance. This may be zero in API responses for instances that are not yet in state `READY`. See [the documentation](https://cloud.google.com/spanner/docs/compute-capacity) for more information about nodes and processing units. "state": "A String", # Output only. The current instance state. For CreateInstance, the state must be either omitted or set to `CREATING`. For UpdateInstance, the state must be either omitted or set to `READY`. "updateTime": "A String", # Output only. The time at which the instance was most recently updated. }, @@ -494,8 +494,8 @@

Method Details

"a_key": "A String", }, "name": "A String", # Required. A unique identifier for the instance, which cannot be changed after the instance is created. Values are of the form `projects//instances/a-z*[a-z0-9]`. The final segment of the name must be between 2 and 64 characters in length. - "nodeCount": 42, # The number of nodes allocated to this instance. At most one of either node_count or processing_units should be present in the message. Users can set the node_count field to specify the target number of nodes allocated to the instance. This may be zero in API responses for instances that are not yet in state `READY`. See [the documentation](https://cloud.google.com/spanner/docs/compute-capacity) for more information about nodes and processing units. - "processingUnits": 42, # The number of processing units allocated to this instance. At most one of processing_units or node_count should be present in the message. Users can set the processing_units field to specify the target number of processing units allocated to the instance. This may be zero in API responses for instances that are not yet in state `READY`. See [the documentation](https://cloud.google.com/spanner/docs/compute-capacity) for more information about nodes and processing units. + "nodeCount": 42, # The number of nodes allocated to this instance. At most one of either node_count or processing_units should be present in the message. Users can set the node_count field to specify the target number of nodes allocated to the instance. If autoscaling is enabled, node_count is treated as an OUTPUT_ONLY field and reflects the current number of nodes allocated to the instance. This may be zero in API responses for instances that are not yet in state `READY`. See [the documentation](https://cloud.google.com/spanner/docs/compute-capacity) for more information about nodes and processing units. + "processingUnits": 42, # The number of processing units allocated to this instance. At most one of processing_units or node_count should be present in the message. Users can set the processing_units field to specify the target number of processing units allocated to the instance. If autoscaling is enabled, processing_units is treated as an OUTPUT_ONLY field and reflects the current number of processing units allocated to the instance. This may be zero in API responses for instances that are not yet in state `READY`. See [the documentation](https://cloud.google.com/spanner/docs/compute-capacity) for more information about nodes and processing units. "state": "A String", # Output only. The current instance state. For CreateInstance, the state must be either omitted or set to `CREATING`. For UpdateInstance, the state must be either omitted or set to `READY`. "updateTime": "A String", # Output only. The time at which the instance was most recently updated. }, diff --git a/docs/dyn/spanner_v1.projects.instances.instancePartitions.html b/docs/dyn/spanner_v1.projects.instances.instancePartitions.html index ea298ee94bc..43ba05f6d0a 100644 --- a/docs/dyn/spanner_v1.projects.instances.instancePartitions.html +++ b/docs/dyn/spanner_v1.projects.instances.instancePartitions.html @@ -222,7 +222,7 @@

Method Details

Lists all instance partitions for the given instance.
 
 Args:
-  parent: string, Required. The instance whose instance partitions should be listed. Values are of the form `projects//instances/`. (required)
+  parent: string, Required. The instance whose instance partitions should be listed. Values are of the form `projects//instances/`. Use `{instance} = '-'` to list instance partitions for all Instances in a project, e.g., `projects/myproject/instances/-`. (required)
   instancePartitionDeadline: string, Optional. Deadline used while retrieving metadata for instance partitions. Instance partitions whose metadata cannot be retrieved within this deadline will be added to unreachable in ListInstancePartitionsResponse.
   pageSize: integer, Number of instance partitions to be returned in the response. If 0 or less, defaults to the server's maximum allowed page size.
   pageToken: string, If non-empty, `page_token` should contain a next_page_token from a previous ListInstancePartitionsResponse.
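A hedged sketch of the wildcard listing described above (placeholder project ID), paging with list_next and surfacing the unreachable entries:

    from googleapiclient import discovery

    service = discovery.build("spanner", "v1")
    partitions = service.projects().instances().instancePartitions()

    # '-' as the instance ID lists partitions across all instances in the project.
    request = partitions.list(parent="projects/my-project/instances/-")
    while request is not None:
        response = request.execute()
        for partition in response.get("instancePartitions", []):
            print(partition["name"])
        # Names whose metadata missed instancePartitionDeadline end up here.
        for name in response.get("unreachable", []):
            print("unreachable:", name)
        request = partitions.list_next(
            previous_request=request, previous_response=response
        )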
@@ -255,7 +255,7 @@ 

Method Details

}, ], "nextPageToken": "A String", # `next_page_token` can be sent in a subsequent ListInstancePartitions call to fetch more of the matching instance partitions. - "unreachable": [ # The list of unreachable instance partitions. It includes the names of instance partitions whose metadata could not be retrieved within instance_partition_deadline. + "unreachable": [ # The list of unreachable instances or instance partitions. It includes the names of instances or instance partitions whose metadata could not be retrieved within instance_partition_deadline. "A String", ], }
diff --git a/docs/dyn/versionhistory_v1.platforms.channels.versions.releases.html b/docs/dyn/versionhistory_v1.platforms.channels.versions.releases.html index fc99f7fc6c4..307b9761e0e 100644 --- a/docs/dyn/versionhistory_v1.platforms.channels.versions.releases.html +++ b/docs/dyn/versionhistory_v1.platforms.channels.versions.releases.html @@ -114,6 +114,7 @@

Method Details

"fraction": 3.14, # Rollout fraction. This fraction indicates the fraction of people that should receive this version in this release. If the fraction is not specified in ReleaseManager, the API will assume fraction is 1. "fractionGroup": "A String", # Rollout fraction group. Only fractions with the same fraction_group are statistically comparable: there may be non-fractional differences between different fraction groups. "name": "A String", # Release name. Format is "{product}/platforms/{platform}/channels/{channel}/versions/{version}/releases/{release}" + "pinnable": True or False, # Whether or not the release was available for version pinning. "serving": { # Represents a time interval, encoded as a Timestamp start (inclusive) and a Timestamp end (exclusive). The start must be less than or equal to the end. When the start equals the end, the interval is empty (matches no time). When both start and end are unspecified, the interval matches any time. # Timestamp interval of when the release was live. If end_time is unspecified, the release is currently live. "endTime": "A String", # Optional. Exclusive end of the interval. If specified, a Timestamp matching this interval will have to be before the end. "startTime": "A String", # Optional. Inclusive start of the interval. If specified, a Timestamp matching this interval will have to be the same or after the start. diff --git a/docs/dyn/workflowexecutions_v1.projects.locations.workflows.executions.stepEntries.html b/docs/dyn/workflowexecutions_v1.projects.locations.workflows.executions.stepEntries.html index 2205251e72c..2593f500cdd 100644 --- a/docs/dyn/workflowexecutions_v1.projects.locations.workflows.executions.stepEntries.html +++ b/docs/dyn/workflowexecutions_v1.projects.locations.workflows.executions.stepEntries.html @@ -125,6 +125,7 @@

Method Details

"state": "A String", # Output only. The state of the step entry. "step": "A String", # Output only. The name of the step this step entry belongs to. "stepEntryMetadata": { # StepEntryMetadata contains metadata information about this step. # Output only. The StepEntryMetadata associated to this step. + "expectedIteration": "A String", # Expected iteration represents the expected number of iterations in the step's progress. "progressNumber": "A String", # Progress number represents the current state of the current progress. eg: A step entry represents the 4th iteration in a progress of PROGRESS_TYPE_FOR. "progressType": "A String", # Progress type of this step entry. "threadId": "A String", # Child thread id that this step entry belongs to. @@ -175,6 +176,7 @@

Method Details

"state": "A String", # Output only. The state of the step entry. "step": "A String", # Output only. The name of the step this step entry belongs to. "stepEntryMetadata": { # StepEntryMetadata contains metadata information about this step. # Output only. The StepEntryMetadata associated to this step. + "expectedIteration": "A String", # Expected iteration represents the expected number of iterations in the step's progress. "progressNumber": "A String", # Progress number represents the current state of the current progress. eg: A step entry represents the 4th iteration in a progress of PROGRESS_TYPE_FOR. "progressType": "A String", # Progress type of this step entry. "threadId": "A String", # Child thread id that this step entry belongs to. diff --git a/googleapiclient/discovery_cache/documents/abusiveexperiencereport.v1.json b/googleapiclient/discovery_cache/documents/abusiveexperiencereport.v1.json index c6ee65b1d75..71f5acf3779 100644 --- a/googleapiclient/discovery_cache/documents/abusiveexperiencereport.v1.json +++ b/googleapiclient/discovery_cache/documents/abusiveexperiencereport.v1.json @@ -139,7 +139,7 @@ } } }, -"revision": "20240520", +"revision": "20240527", "rootUrl": "https://abusiveexperiencereport.googleapis.com/", "schemas": { "SiteSummaryResponse": { diff --git a/googleapiclient/discovery_cache/documents/acceleratedmobilepageurl.v1.json b/googleapiclient/discovery_cache/documents/acceleratedmobilepageurl.v1.json index 0d765833935..d93d5626155 100644 --- a/googleapiclient/discovery_cache/documents/acceleratedmobilepageurl.v1.json +++ b/googleapiclient/discovery_cache/documents/acceleratedmobilepageurl.v1.json @@ -115,7 +115,7 @@ } } }, -"revision": "20240526", +"revision": "20240602", "rootUrl": "https://acceleratedmobilepageurl.googleapis.com/", "schemas": { "AmpUrl": { diff --git a/googleapiclient/discovery_cache/documents/accessapproval.v1.json b/googleapiclient/discovery_cache/documents/accessapproval.v1.json index cead8a3db53..9e139e3779b 100644 --- a/googleapiclient/discovery_cache/documents/accessapproval.v1.json +++ b/googleapiclient/discovery_cache/documents/accessapproval.v1.json @@ -913,7 +913,7 @@ } } }, -"revision": "20240524", +"revision": "20240531", "rootUrl": "https://accessapproval.googleapis.com/", "schemas": { "AccessApprovalServiceAccount": { diff --git a/googleapiclient/discovery_cache/documents/accesscontextmanager.v1.json b/googleapiclient/discovery_cache/documents/accesscontextmanager.v1.json index 2866fe7ecdc..c94d18fbecf 100644 --- a/googleapiclient/discovery_cache/documents/accesscontextmanager.v1.json +++ b/googleapiclient/discovery_cache/documents/accesscontextmanager.v1.json @@ -1290,7 +1290,7 @@ } } }, -"revision": "20240520", +"revision": "20240527", "rootUrl": "https://accesscontextmanager.googleapis.com/", "schemas": { "AccessContextManagerOperationMetadata": { diff --git a/googleapiclient/discovery_cache/documents/acmedns.v1.json b/googleapiclient/discovery_cache/documents/acmedns.v1.json index a5b9512b7b9..d2c4deca4e4 100644 --- a/googleapiclient/discovery_cache/documents/acmedns.v1.json +++ b/googleapiclient/discovery_cache/documents/acmedns.v1.json @@ -146,7 +146,7 @@ } } }, -"revision": "20240526", +"revision": "20240602", "rootUrl": "https://acmedns.googleapis.com/", "schemas": { "AcmeChallengeSet": { diff --git a/googleapiclient/discovery_cache/documents/addressvalidation.v1.json b/googleapiclient/discovery_cache/documents/addressvalidation.v1.json index 6d60535dd86..3c53557faed 100644 --- 
a/googleapiclient/discovery_cache/documents/addressvalidation.v1.json +++ b/googleapiclient/discovery_cache/documents/addressvalidation.v1.json @@ -151,7 +151,7 @@ } } }, -"revision": "20240526", +"revision": "20240602", "rootUrl": "https://addressvalidation.googleapis.com/", "schemas": { "GoogleGeoTypeViewport": { diff --git a/googleapiclient/discovery_cache/documents/adexchangebuyer2.v2beta1.json b/googleapiclient/discovery_cache/documents/adexchangebuyer2.v2beta1.json index c11ee9ca619..877f8c9f08c 100644 --- a/googleapiclient/discovery_cache/documents/adexchangebuyer2.v2beta1.json +++ b/googleapiclient/discovery_cache/documents/adexchangebuyer2.v2beta1.json @@ -3115,7 +3115,7 @@ } } }, -"revision": "20240523", +"revision": "20240603", "rootUrl": "https://adexchangebuyer.googleapis.com/", "schemas": { "AbsoluteDateRange": { diff --git a/googleapiclient/discovery_cache/documents/adexperiencereport.v1.json b/googleapiclient/discovery_cache/documents/adexperiencereport.v1.json index cb6df6aa476..c78d6ac8caa 100644 --- a/googleapiclient/discovery_cache/documents/adexperiencereport.v1.json +++ b/googleapiclient/discovery_cache/documents/adexperiencereport.v1.json @@ -139,7 +139,7 @@ } } }, -"revision": "20240520", +"revision": "20240527", "rootUrl": "https://adexperiencereport.googleapis.com/", "schemas": { "PlatformSummary": { diff --git a/googleapiclient/discovery_cache/documents/admin.datatransfer_v1.json b/googleapiclient/discovery_cache/documents/admin.datatransfer_v1.json index 932d7e99324..56371d1152f 100644 --- a/googleapiclient/discovery_cache/documents/admin.datatransfer_v1.json +++ b/googleapiclient/discovery_cache/documents/admin.datatransfer_v1.json @@ -272,7 +272,7 @@ } } }, -"revision": "20240521", +"revision": "20240528", "rootUrl": "https://admin.googleapis.com/", "schemas": { "Application": { diff --git a/googleapiclient/discovery_cache/documents/admin.directory_v1.json b/googleapiclient/discovery_cache/documents/admin.directory_v1.json index a9fde08eed8..c4ea893fed3 100644 --- a/googleapiclient/discovery_cache/documents/admin.directory_v1.json +++ b/googleapiclient/discovery_cache/documents/admin.directory_v1.json @@ -4671,7 +4671,7 @@ } } }, -"revision": "20240521", +"revision": "20240528", "rootUrl": "https://admin.googleapis.com/", "schemas": { "Alias": { diff --git a/googleapiclient/discovery_cache/documents/admin.reports_v1.json b/googleapiclient/discovery_cache/documents/admin.reports_v1.json index b5aa5e169e9..cd13533b980 100644 --- a/googleapiclient/discovery_cache/documents/admin.reports_v1.json +++ b/googleapiclient/discovery_cache/documents/admin.reports_v1.json @@ -626,7 +626,7 @@ } } }, -"revision": "20240521", +"revision": "20240528", "rootUrl": "https://admin.googleapis.com/", "schemas": { "Activities": { diff --git a/googleapiclient/discovery_cache/documents/admob.v1.json b/googleapiclient/discovery_cache/documents/admob.v1.json index 990eb319c7d..d20353bd3ca 100644 --- a/googleapiclient/discovery_cache/documents/admob.v1.json +++ b/googleapiclient/discovery_cache/documents/admob.v1.json @@ -321,7 +321,7 @@ } } }, -"revision": "20240522", +"revision": "20240603", "rootUrl": "https://admob.googleapis.com/", "schemas": { "AdUnit": { diff --git a/googleapiclient/discovery_cache/documents/admob.v1beta.json b/googleapiclient/discovery_cache/documents/admob.v1beta.json index 2b029bf8373..4176ed790f4 100644 --- a/googleapiclient/discovery_cache/documents/admob.v1beta.json +++ b/googleapiclient/discovery_cache/documents/admob.v1beta.json @@ -758,7 +758,7 @@ } } }, 
-"revision": "20240522", +"revision": "20240603", "rootUrl": "https://admob.googleapis.com/", "schemas": { "AdSource": { diff --git a/googleapiclient/discovery_cache/documents/adsense.v2.json b/googleapiclient/discovery_cache/documents/adsense.v2.json index 6b9c900d3fa..79963f485aa 100644 --- a/googleapiclient/discovery_cache/documents/adsense.v2.json +++ b/googleapiclient/discovery_cache/documents/adsense.v2.json @@ -1912,7 +1912,7 @@ } } }, -"revision": "20240523", +"revision": "20240602", "rootUrl": "https://adsense.googleapis.com/", "schemas": { "Account": { diff --git a/googleapiclient/discovery_cache/documents/advisorynotifications.v1.json b/googleapiclient/discovery_cache/documents/advisorynotifications.v1.json index 648546f1755..df8d8d1c2ca 100644 --- a/googleapiclient/discovery_cache/documents/advisorynotifications.v1.json +++ b/googleapiclient/discovery_cache/documents/advisorynotifications.v1.json @@ -412,7 +412,7 @@ } } }, -"revision": "20240519", +"revision": "20240602", "rootUrl": "https://advisorynotifications.googleapis.com/", "schemas": { "GoogleCloudAdvisorynotificationsV1Attachment": { diff --git a/googleapiclient/discovery_cache/documents/aiplatform.v1.json b/googleapiclient/discovery_cache/documents/aiplatform.v1.json index a33ccc3462d..c161881480a 100644 --- a/googleapiclient/discovery_cache/documents/aiplatform.v1.json +++ b/googleapiclient/discovery_cache/documents/aiplatform.v1.json @@ -3154,7 +3154,8 @@ "$ref": "GoogleCloudAiplatformV1DirectPredictResponse" }, "scopes": [ -"https://www.googleapis.com/auth/cloud-platform" +"https://www.googleapis.com/auth/cloud-platform", +"https://www.googleapis.com/auth/cloud-platform.read-only" ] }, "directRawPredict": { @@ -3182,7 +3183,8 @@ "$ref": "GoogleCloudAiplatformV1DirectRawPredictResponse" }, "scopes": [ -"https://www.googleapis.com/auth/cloud-platform" +"https://www.googleapis.com/auth/cloud-platform", +"https://www.googleapis.com/auth/cloud-platform.read-only" ] }, "explain": { @@ -3210,7 +3212,8 @@ "$ref": "GoogleCloudAiplatformV1ExplainResponse" }, "scopes": [ -"https://www.googleapis.com/auth/cloud-platform" +"https://www.googleapis.com/auth/cloud-platform", +"https://www.googleapis.com/auth/cloud-platform.read-only" ] }, "generateContent": { @@ -3238,7 +3241,8 @@ "$ref": "GoogleCloudAiplatformV1GenerateContentResponse" }, "scopes": [ -"https://www.googleapis.com/auth/cloud-platform" +"https://www.googleapis.com/auth/cloud-platform", +"https://www.googleapis.com/auth/cloud-platform.read-only" ] }, "get": { @@ -3405,7 +3409,8 @@ "$ref": "GoogleCloudAiplatformV1PredictResponse" }, "scopes": [ -"https://www.googleapis.com/auth/cloud-platform" +"https://www.googleapis.com/auth/cloud-platform", +"https://www.googleapis.com/auth/cloud-platform.read-only" ] }, "rawPredict": { @@ -3433,7 +3438,8 @@ "$ref": "GoogleApiHttpBody" }, "scopes": [ -"https://www.googleapis.com/auth/cloud-platform" +"https://www.googleapis.com/auth/cloud-platform", +"https://www.googleapis.com/auth/cloud-platform.read-only" ] }, "serverStreamingPredict": { @@ -3461,7 +3467,8 @@ "$ref": "GoogleCloudAiplatformV1StreamingPredictResponse" }, "scopes": [ -"https://www.googleapis.com/auth/cloud-platform" +"https://www.googleapis.com/auth/cloud-platform", +"https://www.googleapis.com/auth/cloud-platform.read-only" ] }, "streamGenerateContent": { @@ -3489,7 +3496,8 @@ "$ref": "GoogleCloudAiplatformV1GenerateContentResponse" }, "scopes": [ -"https://www.googleapis.com/auth/cloud-platform" +"https://www.googleapis.com/auth/cloud-platform", 
+"https://www.googleapis.com/auth/cloud-platform.read-only" ] }, "streamRawPredict": { @@ -3517,7 +3525,8 @@ "$ref": "GoogleApiHttpBody" }, "scopes": [ -"https://www.googleapis.com/auth/cloud-platform" +"https://www.googleapis.com/auth/cloud-platform", +"https://www.googleapis.com/auth/cloud-platform.read-only" ] }, "undeployModel": { @@ -11275,6 +11284,40 @@ "https://www.googleapis.com/auth/cloud-platform" ] }, +"patch": { +"description": "Updates a NotebookRuntimeTemplate.", +"flatPath": "v1/projects/{projectsId}/locations/{locationsId}/notebookRuntimeTemplates/{notebookRuntimeTemplatesId}", +"httpMethod": "PATCH", +"id": "aiplatform.projects.locations.notebookRuntimeTemplates.patch", +"parameterOrder": [ +"name" +], +"parameters": { +"name": { +"description": "The resource name of the NotebookRuntimeTemplate.", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/notebookRuntimeTemplates/[^/]+$", +"required": true, +"type": "string" +}, +"updateMask": { +"description": "Required. The update mask applies to the resource. For the `FieldMask` definition, see google.protobuf.FieldMask. Input format: `{paths: \"${updated_filed}\"}` Updatable fields: * `encryption_spec.kms_key_name`", +"format": "google-fieldmask", +"location": "query", +"type": "string" +} +}, +"path": "v1/{+name}", +"request": { +"$ref": "GoogleCloudAiplatformV1NotebookRuntimeTemplate" +}, +"response": { +"$ref": "GoogleCloudAiplatformV1NotebookRuntimeTemplate" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +}, "setIamPolicy": { "description": "Sets the access control policy on the specified resource. Replaces any existing policy. Can return `NOT_FOUND`, `INVALID_ARGUMENT`, and `PERMISSION_DENIED` errors.", "flatPath": "v1/projects/{projectsId}/locations/{locationsId}/notebookRuntimeTemplates/{notebookRuntimeTemplatesId}:setIamPolicy", @@ -12476,7 +12519,8 @@ "$ref": "GoogleCloudAiplatformV1GenerateContentResponse" }, "scopes": [ -"https://www.googleapis.com/auth/cloud-platform" +"https://www.googleapis.com/auth/cloud-platform", +"https://www.googleapis.com/auth/cloud-platform.read-only" ] }, "predict": { @@ -12504,7 +12548,8 @@ "$ref": "GoogleCloudAiplatformV1PredictResponse" }, "scopes": [ -"https://www.googleapis.com/auth/cloud-platform" +"https://www.googleapis.com/auth/cloud-platform", +"https://www.googleapis.com/auth/cloud-platform.read-only" ] }, "rawPredict": { @@ -12532,7 +12577,8 @@ "$ref": "GoogleApiHttpBody" }, "scopes": [ -"https://www.googleapis.com/auth/cloud-platform" +"https://www.googleapis.com/auth/cloud-platform", +"https://www.googleapis.com/auth/cloud-platform.read-only" ] }, "serverStreamingPredict": { @@ -12560,7 +12606,8 @@ "$ref": "GoogleCloudAiplatformV1StreamingPredictResponse" }, "scopes": [ -"https://www.googleapis.com/auth/cloud-platform" +"https://www.googleapis.com/auth/cloud-platform", +"https://www.googleapis.com/auth/cloud-platform.read-only" ] }, "streamGenerateContent": { @@ -12588,7 +12635,8 @@ "$ref": "GoogleCloudAiplatformV1GenerateContentResponse" }, "scopes": [ -"https://www.googleapis.com/auth/cloud-platform" +"https://www.googleapis.com/auth/cloud-platform", +"https://www.googleapis.com/auth/cloud-platform.read-only" ] }, "streamRawPredict": { @@ -12616,7 +12664,8 @@ "$ref": "GoogleApiHttpBody" }, "scopes": [ -"https://www.googleapis.com/auth/cloud-platform" +"https://www.googleapis.com/auth/cloud-platform", +"https://www.googleapis.com/auth/cloud-platform.read-only" ] } } @@ -16238,121 +16287,9 @@ } } }, -"revision": "20240510", +"revision": 
"20240529", "rootUrl": "https://aiplatform.googleapis.com/", "schemas": { -"CloudAiLargeModelsVisionFilteredText": { -"description": "Details for filtered input text.", -"id": "CloudAiLargeModelsVisionFilteredText", -"properties": { -"category": { -"description": "Confidence level", -"enum": [ -"RAI_CATEGORY_UNSPECIFIED", -"OBSCENE", -"SEXUALLY_EXPLICIT", -"IDENTITY_ATTACK", -"VIOLENCE_ABUSE", -"CSAI", -"SPII", -"CELEBRITY", -"FACE_IMG", -"WATERMARK_IMG", -"MEMORIZATION_IMG", -"CSAI_IMG", -"PORN_IMG", -"VIOLENCE_IMG", -"CHILD_IMG", -"TOXIC", -"SENSITIVE_WORD", -"PERSON_IMG", -"ICA_IMG", -"SEXUAL_IMG", -"IU_IMG", -"RACY_IMG", -"PEDO_IMG", -"DEATH_HARM_TRAGEDY", -"HEALTH", -"FIREARMS_WEAPONS", -"RELIGIOUS_BELIEF", -"ILLICIT_DRUGS", -"WAR_CONFLICT", -"POLITICS", -"HATE_SYMBOL_IMG", -"CHILD_TEXT", -"DANGEROUS_CONTENT", -"RECITATION_TEXT", -"CELEBRITY_IMG", -"WATERMARK_IMG_REMOVAL" -], -"enumDescriptions": [ -"", -"", -"Porn", -"Hate", -"", -"", -"", -"", -"", -"", -"", -"", -"", -"", -"", -"", -"", -"", -"", -"", -"", -"", -"", -"SafetyAttributes returned but not filtered on", -"", -"", -"", -"", -"", -"", -"End of list", -"", -"Text category from SafetyCat v3", -"", -"", -"Error message when user attempts to remove watermark from editing image" -], -"type": "string" -}, -"confidence": { -"description": "Filtered category", -"enum": [ -"CONFIDENCE_UNSPECIFIED", -"CONFIDENCE_LOW", -"CONFIDENCE_MEDIUM", -"CONFIDENCE_HIGH" -], -"enumDescriptions": [ -"", -"", -"", -"" -], -"type": "string" -}, -"prompt": { -"description": "Input prompt", -"type": "string" -}, -"score": { -"description": "Score for category", -"format": "double", -"type": "number" -} -}, -"type": "object" -}, "CloudAiLargeModelsVisionGenerateVideoResponse": { "description": "Generate video response.", "id": "CloudAiLargeModelsVisionGenerateVideoResponse", @@ -16364,10 +16301,6 @@ }, "type": "array" }, -"raiErrorMessage": { -"description": "Returns rai error message for filtered videos.", -"type": "string" -}, "raiMediaFilteredCount": { "description": "Returns if any videos were filtered due to RAI policies.", "format": "int32", @@ -16379,10 +16312,6 @@ "type": "string" }, "type": "array" -}, -"raiTextFilteredReason": { -"$ref": "CloudAiLargeModelsVisionFilteredText", -"description": "Returns filtered text rai info." 
} }, "type": "object" @@ -16494,6 +16423,13 @@ "CloudAiLargeModelsVisionRaiInfo": { "id": "CloudAiLargeModelsVisionRaiInfo", "properties": { +"detectedLabels": { +"description": "The list of detected labels for different rai categories.", +"items": { +"$ref": "CloudAiLargeModelsVisionRaiInfoDetectedLabels" +}, +"type": "array" +}, "raiCategories": { "description": "List of rai categories' information to return", "items": { @@ -16512,6 +16448,80 @@ }, "type": "object" }, +"CloudAiLargeModelsVisionRaiInfoDetectedLabels": { +"description": "Filters returning list of deteceted labels, scores, and bounding boxes.", +"id": "CloudAiLargeModelsVisionRaiInfoDetectedLabels", +"properties": { +"entities": { +"description": "The list of detected entities for the rai signal.", +"items": { +"$ref": "CloudAiLargeModelsVisionRaiInfoDetectedLabelsEntity" +}, +"type": "array" +}, +"raiCategory": { +"description": "The RAI category for the deteceted labels.", +"type": "string" +} +}, +"type": "object" +}, +"CloudAiLargeModelsVisionRaiInfoDetectedLabelsBoundingBox": { +"description": "An integer bounding box of original pixels of the image for the detected labels.", +"id": "CloudAiLargeModelsVisionRaiInfoDetectedLabelsBoundingBox", +"properties": { +"x1": { +"description": "The X coordinate of the top-left corner, in pixels.", +"format": "int32", +"type": "integer" +}, +"x2": { +"description": "The X coordinate of the bottom-right corner, in pixels.", +"format": "int32", +"type": "integer" +}, +"y1": { +"description": "The Y coordinate of the top-left corner, in pixels.", +"format": "int32", +"type": "integer" +}, +"y2": { +"description": "The Y coordinate of the bottom-right corner, in pixels.", +"format": "int32", +"type": "integer" +} +}, +"type": "object" +}, +"CloudAiLargeModelsVisionRaiInfoDetectedLabelsEntity": { +"description": "The properties for a detected entity from the rai signal.", +"id": "CloudAiLargeModelsVisionRaiInfoDetectedLabelsEntity", +"properties": { +"boundingBox": { +"$ref": "CloudAiLargeModelsVisionRaiInfoDetectedLabelsBoundingBox", +"description": "Bounding box of the label" +}, +"description": { +"description": "Description of the label", +"type": "string" +}, +"iouScore": { +"description": "The intersection ratio between the detection bounding box and the mask.", +"format": "float", +"type": "number" +}, +"mid": { +"description": "MID of the label", +"type": "string" +}, +"score": { +"description": "Confidence score of the label", +"format": "float", +"type": "number" +} +}, +"type": "object" +}, "CloudAiLargeModelsVisionSemanticFilterResponse": { "id": "CloudAiLargeModelsVisionSemanticFilterResponse", "properties": { @@ -16545,6 +16555,75 @@ }, "type": "object" }, +"CloudAiPlatformCommonCreatePipelineJobApiErrorDetail": { +"description": "Create API error message for Vertex Pipeline. 
Next Id: 3.", +"id": "CloudAiPlatformCommonCreatePipelineJobApiErrorDetail", +"properties": { +"errorCause": { +"description": "The error root cause returned by CreatePipelineJob API.", +"enum": [ +"ERROR_CAUSE_UNSPECIFIED", +"INVALID_PIPELINE_SPEC_FORMAT", +"INVALID_PIPELINE_SPEC", +"INVALID_DEPLOYMENT_CONFIG", +"INVALID_DEPLOYMENT_SPEC", +"INVALID_INSTANCE_SCHEMA", +"INVALID_CUSTOM_JOB", +"INVALID_CONTAINER_SPEC", +"INVALID_NOTIFICATION_EMAIL_SETUP", +"INVALID_SERVICE_ACCOUNT_SETUP", +"INVALID_KMS_SETUP", +"INVALID_NETWORK_SETUP", +"INVALID_PIPELINE_TASK_SPEC", +"INVALID_PIPELINE_TASK_ARTIFACT", +"INVALID_IMPORTER_SPEC", +"INVALID_RESOLVER_SPEC", +"INVALID_RUNTIME_PARAMETERS", +"CLOUD_API_NOT_ENABLED", +"INVALID_GCS_INPUT_URI", +"INVALID_GCS_OUTPUT_URI", +"INVALID_COMPONENT_SPEC", +"INVALID_DAG_OUTPUTS_SPEC", +"INVALID_DAG_SPEC", +"INSUFFICIENT_QUOTA", +"INTERNAL" +], +"enumDescriptions": [ +"Should never be used.", +"IR Pipeline Spec can not been parsed to yaml or json format.", +"A pipeline spec is invalid.", +"A deployment config is invalid.", +"A deployment spec is invalid.", +"An instance schema is invalid.", +"A custom job is invalid.", +"A container spec is invalid.", +"Notification email setup is invalid.", +"Service account setup is invalid.", +"KMS setup is invalid.", +"Network setup is invalid.", +"Task spec is invalid.", +"Task artifact is invalid.", +"Importer spec is invalid.", +"Resolver spec is invalid.", +"Runtime Parameters are invalid.", +"Cloud API not enabled.", +"Invalid GCS input uri", +"Invalid GCS output uri", +"Component spec of pipeline is invalid.", +"DagOutputsSpec is invalid.", +"DagSpec is invalid.", +"Project does not have enough quota.", +"An internal error with unknown cause." +], +"type": "string" +}, +"publicMessage": { +"description": "Public messages contains actionable items for the error cause.", +"type": "string" +} +}, +"type": "object" +}, "GoogleApiHttpBody": { "description": "Message that represents an arbitrary HTTP body. It should only be used for payload formats that can't be represented as JSON, such as raw binary or an HTML page. This message can be used both in streaming and non-streaming API methods in the request as well as the response. It can be used as a top-level request field, which is convenient if one wants to extract parameters from either the URL or HTTP template into the request fields and also want access to the raw HTTP body. Example: message GetResourceRequest { // A unique request id. string request_id = 1; // The raw HTTP body is bound to this field. google.api.HttpBody http_body = 2; } service ResourceService { rpc GetResource(GetResourceRequest) returns (google.api.HttpBody); rpc UpdateResource(google.api.HttpBody) returns (google.protobuf.Empty); } Example with streaming methods: service CaldavService { rpc GetCalendar(stream google.api.HttpBody) returns (stream google.api.HttpBody); rpc UpdateCalendar(stream google.api.HttpBody) returns (stream google.api.HttpBody); } Use of this type only changes how the request and response bodies are handled, all other features will continue to work unchanged.", "id": "GoogleApiHttpBody", @@ -18911,6 +18990,142 @@ }, "type": "object" }, +"GoogleCloudAiplatformV1DatasetDistribution": { +"description": "Distribution computed over a tuning dataset.", +"id": "GoogleCloudAiplatformV1DatasetDistribution", +"properties": { +"buckets": { +"description": "Output only. 
Defines the histogram bucket.", +"items": { +"$ref": "GoogleCloudAiplatformV1DatasetDistributionDistributionBucket" +}, +"readOnly": true, +"type": "array" +}, +"max": { +"description": "Output only. The maximum of the population values.", +"format": "double", +"readOnly": true, +"type": "number" +}, +"mean": { +"description": "Output only. The arithmetic mean of the values in the population.", +"format": "double", +"readOnly": true, +"type": "number" +}, +"median": { +"description": "Output only. The median of the values in the population.", +"format": "double", +"readOnly": true, +"type": "number" +}, +"min": { +"description": "Output only. The minimum of the population values.", +"format": "double", +"readOnly": true, +"type": "number" +}, +"p5": { +"description": "Output only. The 5th percentile of the values in the population.", +"format": "double", +"readOnly": true, +"type": "number" +}, +"p95": { +"description": "Output only. The 95th percentile of the values in the population.", +"format": "double", +"readOnly": true, +"type": "number" +}, +"sum": { +"description": "Output only. Sum of a given population of values.", +"format": "double", +"readOnly": true, +"type": "number" +} +}, +"type": "object" +}, +"GoogleCloudAiplatformV1DatasetDistributionDistributionBucket": { +"description": "Dataset bucket used to create a histogram for the distribution given a population of values.", +"id": "GoogleCloudAiplatformV1DatasetDistributionDistributionBucket", +"properties": { +"count": { +"description": "Output only. Number of values in the bucket.", +"format": "int64", +"readOnly": true, +"type": "string" +}, +"left": { +"description": "Output only. Left bound of the bucket.", +"format": "double", +"readOnly": true, +"type": "number" +}, +"right": { +"description": "Output only. Right bound of the bucket.", +"format": "double", +"readOnly": true, +"type": "number" +} +}, +"type": "object" +}, +"GoogleCloudAiplatformV1DatasetStats": { +"description": "Statistics computed over a tuning dataset.", +"id": "GoogleCloudAiplatformV1DatasetStats", +"properties": { +"totalBillableCharacterCount": { +"description": "Output only. Number of billable characters in the tuning dataset.", +"format": "int64", +"readOnly": true, +"type": "string" +}, +"totalTuningCharacterCount": { +"description": "Output only. Number of tuning characters in the tuning dataset.", +"format": "int64", +"readOnly": true, +"type": "string" +}, +"tuningDatasetExampleCount": { +"description": "Output only. Number of examples in the tuning dataset.", +"format": "int64", +"readOnly": true, +"type": "string" +}, +"tuningStepCount": { +"description": "Output only. Number of tuning steps for this Tuning Job.", +"format": "int64", +"readOnly": true, +"type": "string" +}, +"userDatasetExamples": { +"description": "Output only. Sample user messages in the training dataset uri.", +"items": { +"$ref": "GoogleCloudAiplatformV1Content" +}, +"readOnly": true, +"type": "array" +}, +"userInputTokenDistribution": { +"$ref": "GoogleCloudAiplatformV1DatasetDistribution", +"description": "Output only. Dataset distributions for the user input tokens.", +"readOnly": true +}, +"userMessagePerExampleDistribution": { +"$ref": "GoogleCloudAiplatformV1DatasetDistribution", +"description": "Output only. Dataset distributions for the messages per example.", +"readOnly": true +}, +"userOutputTokenDistribution": { +"$ref": "GoogleCloudAiplatformV1DatasetDistribution", +"description": "Output only. 
Dataset distributions for the user output tokens.", +"readOnly": true +} +}, +"type": "object" +}, "GoogleCloudAiplatformV1DatasetVersion": { "description": "Describes the dataset version.", "id": "GoogleCloudAiplatformV1DatasetVersion", @@ -19407,9 +19622,21 @@ "$ref": "GoogleCloudAiplatformV1DedicatedResources", "description": "Required. The underlying DedicatedResources that the DeploymentResourcePool uses." }, +"disableContainerLogging": { +"description": "If the DeploymentResourcePool is deployed with custom-trained Models or AutoML Tabular Models, the container(s) of the DeploymentResourcePool will send `stderr` and `stdout` streams to Cloud Logging by default. Please note that the logs incur cost, which are subject to [Cloud Logging pricing](https://cloud.google.com/logging/pricing). User can disable container logging by setting this flag to true.", +"type": "boolean" +}, +"encryptionSpec": { +"$ref": "GoogleCloudAiplatformV1EncryptionSpec", +"description": "Customer-managed encryption key spec for a DeploymentResourcePool. If set, this DeploymentResourcePool will be secured by this key. Endpoints and the DeploymentResourcePool they deploy in need to have the same EncryptionSpec." +}, "name": { "description": "Immutable. The resource name of the DeploymentResourcePool. Format: `projects/{project}/locations/{location}/deploymentResourcePools/{deployment_resource_pool}`", "type": "string" +}, +"serviceAccount": { +"description": "The service account that the DeploymentResourcePool's container(s) run as. Specify the email address of the service account. If this service account is not specified, the container(s) run as a service account that doesn't have access to the resource project. Users deploying the Models to this DeploymentResourcePool must have the `iam.serviceAccounts.actAs` permission on this service account.", +"type": "string" } }, "type": "object" @@ -19508,6 +19735,18 @@ }, "type": "object" }, +"GoogleCloudAiplatformV1DistillationDataStats": { +"description": "Statistics computed for datasets used for distillation.", +"id": "GoogleCloudAiplatformV1DistillationDataStats", +"properties": { +"trainingDatasetStats": { +"$ref": "GoogleCloudAiplatformV1DatasetStats", +"description": "Output only. Statistics computed for the training dataset.", +"readOnly": true +} +}, +"type": "object" +}, "GoogleCloudAiplatformV1DoubleArray": { "description": "A list of double values.", "id": "GoogleCloudAiplatformV1DoubleArray", @@ -20867,7 +21106,8 @@ "INT64_ARRAY", "STRING", "STRING_ARRAY", -"BYTES" +"BYTES", +"STRUCT" ], "enumDescriptions": [ "The value type is unspecified.", @@ -20879,7 +21119,8 @@ "Used for Feature that is a list of INT64.", "Used for Feature that is string.", "Used for Feature that is a list of String.", -"Used for Feature that is bytes." +"Used for Feature that is bytes.", +"Used for Feature that is struct." ], "type": "string" }, @@ -21025,6 +21266,10 @@ "$ref": "GoogleCloudAiplatformV1FeatureOnlineStoreDedicatedServingEndpoint", "description": "Optional. The dedicated serving endpoint for this FeatureOnlineStore, which is different from common Vertex service endpoint." }, +"encryptionSpec": { +"$ref": "GoogleCloudAiplatformV1EncryptionSpec", +"description": "Optional. Customer-managed encryption key spec for data storage. If set, online store will be secured by this key." +}, "etag": { "description": "Optional. Used to perform consistent read-modify-write updates. 
If not set, a blind \"overwrite\" update happens.", "type": "string" @@ -21214,6 +21459,10 @@ "stringValue": { "description": "String feature value.", "type": "string" +}, +"structValue": { +"$ref": "GoogleCloudAiplatformV1StructValue", +"description": "A struct type feature value." } }, "type": "object" @@ -21997,6 +22246,36 @@ }, "type": "object" }, +"GoogleCloudAiplatformV1FunctionCallingConfig": { +"description": "Function calling config.", +"id": "GoogleCloudAiplatformV1FunctionCallingConfig", +"properties": { +"allowedFunctionNames": { +"description": "Optional. Function names to call. Only set when the Mode is ANY. Function names should match [FunctionDeclaration.name]. With mode set to ANY, model will predict a function call from the set of function names provided.", +"items": { +"type": "string" +}, +"type": "array" +}, +"mode": { +"description": "Optional. Function calling mode.", +"enum": [ +"MODE_UNSPECIFIED", +"AUTO", +"ANY", +"NONE" ], +"enumDescriptions": [ +"Unspecified function calling mode. This value should not be used.", +"Default model behavior, model decides to predict either a function call or a natural language response.", +"Model is constrained to always predicting a function call only. If \"allowed_function_names\" are set, the predicted function call will be limited to any one of \"allowed_function_names\", else the predicted function call will be any one of the provided \"function_declarations\".", +"Model will not predict any function call. Model behavior is same as when not passing any function declarations." +], +"type": "string" +} +}, +"type": "object" +}, "GoogleCloudAiplatformV1FunctionDeclaration": { "description": "Structured representation of a function declaration as defined by the [OpenAPI 3.0 specification](https://spec.openapis.org/oas/v3.0.3). Included in this declaration are the function name and parameters. This FunctionDeclaration is a representation of a block of code that can be used as a `Tool` by the model and executed by the client.", "id": "GoogleCloudAiplatformV1FunctionDeclaration", @@ -22086,6 +22365,10 @@ "$ref": "GoogleCloudAiplatformV1Content", "description": "Optional. The user provided system instructions for the model. Note: only text should be used in parts and content in each part will be in a separate paragraph." }, +"toolConfig": { +"$ref": "GoogleCloudAiplatformV1ToolConfig", +"description": "Optional. Tool config. This config is shared for all tools provided in the request." +}, "tools": { "description": "Optional. A list of `Tools` the model may use to generate the next response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of the model.", "items": { @@ -22208,21 +22491,9 @@ "description": "Optional. Output response mimetype of the generated candidate text. Supported mimetype: - `text/plain`: (default) Text output. - `application/json`: JSON response in the candidates. The model needs to be prompted to output the appropriate response type, otherwise the behavior is undefined. This is a preview feature.", "type": "string" }, -"responseStyle": { -"description": "Optional. Control Three levels of creativity in the model output.
Default: RESPONSE_STYLE_BALANCED", -"enum": [ -"RESPONSE_STYLE_UNSPECIFIED", -"RESPONSE_STYLE_PRECISE", -"RESPONSE_STYLE_BALANCED", -"RESPONSE_STYLE_CREATIVE" -], -"enumDescriptions": [ -"response style unspecified.", -"Precise response.", -"Default response style.", -"Creative response style." -], -"type": "string" +"responseSchema": { +"$ref": "GoogleCloudAiplatformV1Schema", +"description": "Optional. The `Schema` object allows the definition of input and output data types. These types can be objects, but also primitives and arrays. Represents a select subset of an [OpenAPI 3.0 schema object](https://spec.openapis.org/oas/v3.0.3#schema). If set, a compatible response_mime_type must also be set. Compatible mimetypes: `application/json`: Schema for JSON response." }, "stopSequences": { "description": "Optional. Stop sequences.", @@ -22287,6 +22558,12 @@ }, "type": "object" }, +"GoogleCloudAiplatformV1GoogleSearchRetrieval": { +"description": "Tool to retrieve public web data for grounding, powered by Google.", +"id": "GoogleCloudAiplatformV1GoogleSearchRetrieval", +"properties": {}, +"type": "object" +}, "GoogleCloudAiplatformV1GroundingMetadata": { "description": "Metadata returned to client when grounding is enabled.", "id": "GoogleCloudAiplatformV1GroundingMetadata", @@ -24174,6 +24451,10 @@ "readOnly": true, "type": "string" }, +"dataplexConfig": { +"$ref": "GoogleCloudAiplatformV1MetadataStoreDataplexConfig", +"description": "Optional. Dataplex integration settings." +}, "description": { "description": "Description of the MetadataStore.", "type": "string" @@ -24201,6 +24482,17 @@ }, "type": "object" }, +"GoogleCloudAiplatformV1MetadataStoreDataplexConfig": { +"description": "Represents Dataplex integration settings.", +"id": "GoogleCloudAiplatformV1MetadataStoreDataplexConfig", +"properties": { +"enabledPipelinesLineage": { +"description": "Optional. Whether or not Data Lineage synchronization is enabled for Vertex Pipelines.", +"type": "boolean" +} +}, +"type": "object" +}, "GoogleCloudAiplatformV1MetadataStoreMetadataStoreState": { "description": "Represents state information for a MetadataStore.", "id": "GoogleCloudAiplatformV1MetadataStoreMetadataStoreState", @@ -26150,7 +26442,8 @@ "INVALID_ENCODING", "INVALID_SPARSE_DIMENSIONS", "INVALID_TOKEN_VALUE", -"INVALID_SPARSE_EMBEDDING" +"INVALID_SPARSE_EMBEDDING", +"INVALID_EMBEDDING" ], "enumDescriptions": [ "Default, shall not be used.", @@ -26169,7 +26462,8 @@ "File is not in UTF_8 format.", "Error parsing sparse dimensions field.", "Token restrict value is invalid.", -"Invalid sparse embedding." +"Invalid sparse embedding.", +"Invalid embedding." ], "type": "string" }, @@ -26306,40 +26600,6 @@ }, "type": "object" }, -"GoogleCloudAiplatformV1NotebookReservationAffinity": { -"description": "Notebook Reservation Affinity for consuming Zonal reservation.", -"id": "GoogleCloudAiplatformV1NotebookReservationAffinity", -"properties": { -"consumeReservationType": { -"description": "Required. Specifies the type of reservation from which this instance can consume resources: RESERVATION_ANY (default), RESERVATION_SPECIFIC, or RESERVATION_NONE. See Consuming reserved instances for examples.", -"enum": [ -"RESERVATION_AFFINITY_TYPE_UNSPECIFIED", -"RESERVATION_NONE", -"RESERVATION_ANY", -"RESERVATION_SPECIFIC" -], -"enumDescriptions": [ -"Default type.", -"Do not consume from any allocated capacity.", -"Consume any reservation available.", -"Must consume from a specific reservation. 
Must specify key value fields for specifying the reservations." -], -"type": "string" -}, -"key": { -"description": "Optional. Corresponds to the label key of a reservation resource. To target a RESERVATION_SPECIFIC by name, use compute.googleapis.com/reservation-name as the key and specify the name of your reservation as its value.", -"type": "string" -}, -"values": { -"description": "Optional. Corresponds to the label values of a reservation resource. This must be the full path name of Reservation.", -"items": { -"type": "string" -}, -"type": "array" -} -}, -"type": "object" -}, "GoogleCloudAiplatformV1NotebookRuntime": { "description": "A runtime is a virtual machine allocated to a particular user for a particular Notebook file on temporary basis with lifetime limited to 24 hours.", "id": "GoogleCloudAiplatformV1NotebookRuntime", @@ -26358,6 +26618,11 @@ "description": "Required. The display name of the NotebookRuntime. The name can be up to 128 characters long and can consist of any UTF-8 characters.", "type": "string" }, +"encryptionSpec": { +"$ref": "GoogleCloudAiplatformV1EncryptionSpec", +"description": "Output only. Customer-managed encryption key spec for the notebook runtime.", +"readOnly": true +}, "expirationTime": { "description": "Output only. Timestamp when this NotebookRuntime will be expired: 1. System Predefined NotebookRuntime: 24 hours after creation. After expiration, system predefined runtime will be deleted. 2. User created NotebookRuntime: 6 months after last upgrade. After expiration, user created runtime will be stopped and allowed for upgrade.", "format": "google-datetime", @@ -26379,6 +26644,11 @@ "readOnly": true, "type": "string" }, +"idleShutdownConfig": { +"$ref": "GoogleCloudAiplatformV1NotebookIdleShutdownConfig", +"description": "Output only. The idle shutdown configuration of the notebook runtime.", +"readOnly": true +}, "isUpgradable": { "description": "Output only. Whether NotebookRuntime is upgradable.", "readOnly": true, @@ -26428,11 +26698,6 @@ "readOnly": true, "type": "string" }, -"reservationAffinity": { -"$ref": "GoogleCloudAiplatformV1NotebookReservationAffinity", -"description": "Output only. Reservation Affinity of the notebook runtime.", -"readOnly": true -}, "runtimeState": { "description": "Output only. The runtime (instance) state of the NotebookRuntime.", "enum": [ @@ -26513,6 +26778,10 @@ "description": "Required. The display name of the NotebookRuntimeTemplate. The name can be up to 128 characters long and can consist of any UTF-8 characters.", "type": "string" }, +"encryptionSpec": { +"$ref": "GoogleCloudAiplatformV1EncryptionSpec", +"description": "Customer-managed encryption key spec for the notebook runtime." +}, "etag": { "description": "Used to perform consistent read-modify-write updates. If not set, a blind \"overwrite\" update happens.", "type": "string" @@ -26570,10 +26839,6 @@ ], "type": "string" }, -"reservationAffinity": { -"$ref": "GoogleCloudAiplatformV1NotebookReservationAffinity", -"description": "Optional. Reservation Affinity of the notebook runtime template." -}, "serviceAccount": { "description": "The service account that the runtime workload runs as. You can use any service account within the same project, but you must have the service account user permission to use the instance.
If not specified, the [Compute Engine default service account](https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.", "type": "string" @@ -27429,7 +27694,7 @@ "properties": { "exec": { "$ref": "GoogleCloudAiplatformV1ProbeExecAction", -"description": "Exec specifies the action to take." +"description": "ExecAction probes the health of a container by executing a command." }, "periodSeconds": { "description": "How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. Must be less than timeout_seconds. Maps to Kubernetes probe argument 'periodSeconds'.", @@ -28024,10 +28289,41 @@ }, "type": "object" }, +"GoogleCloudAiplatformV1RayMetricSpec": { +"description": "Configuration for the Ray metrics.", +"id": "GoogleCloudAiplatformV1RayMetricSpec", +"properties": { +"disabled": { +"description": "Optional. Flag to disable the Ray metrics collection.", +"type": "boolean" +} +}, +"type": "object" +}, "GoogleCloudAiplatformV1RaySpec": { "description": "Configuration information for the Ray cluster. For experimental launch, Ray cluster creation and Persistent cluster creation are 1:1 mapping: We will provision all the nodes within the Persistent cluster as Ray nodes.", "id": "GoogleCloudAiplatformV1RaySpec", -"properties": {}, +"properties": { +"headNodeResourcePoolId": { +"description": "Optional. This will be used to indicate which resource pool will serve as the Ray head node (the first node within that pool). Will use the machine from the first workerpool as the head node by default if this field isn't set.", +"type": "string" +}, +"imageUri": { +"description": "Optional. Default image for user to choose a preferred ML framework (for example, TensorFlow or Pytorch) by choosing from [Vertex prebuilt images](https://cloud.google.com/vertex-ai/docs/training/pre-built-containers). Either this or the resource_pool_images is required. Use this field if you need all the resource pools to have the same Ray image. Otherwise, use the {@code resource_pool_images} field.", +"type": "string" +}, +"rayMetricSpec": { +"$ref": "GoogleCloudAiplatformV1RayMetricSpec", +"description": "Optional. Ray metrics configurations." +}, +"resourcePoolImages": { +"additionalProperties": { +"type": "string" +}, +"description": "Optional. Required if image_uri isn't set. A map of resource_pool_id to prebuilt Ray image if users need to use different images for different head/worker pools. This map needs to cover all the resource pool ids. Example: { \"ray_head_node_pool\": \"head image\" \"ray_worker_node_pool1\": \"worker image\" \"ray_worker_node_pool2\": \"another worker image\" }", +"type": "object" +} +}, "type": "object" }, "GoogleCloudAiplatformV1ReadFeatureValuesRequest": { @@ -28256,6 +28552,23 @@ "properties": {}, "type": "object" }, +"GoogleCloudAiplatformV1ReinforcementLearningDataStats": { +"description": "Statistics computed for datasets used for reinforcement learning.", +"id": "GoogleCloudAiplatformV1ReinforcementLearningDataStats", +"properties": { +"preferenceDatasetStats": { +"$ref": "GoogleCloudAiplatformV1DatasetStats", +"description": "Output only. Statistics computed for the preference dataset. This can be either a human preference dataset or a preference dataset generated from AI feedback.", +"readOnly": true +}, +"promptDatasetStats": { +"$ref": "GoogleCloudAiplatformV1DatasetStats", +"description": "Output only.
Statistics computed for the prompt dataset used during reinforcement learning.", +"readOnly": true +} +}, +"type": "object" +}, "GoogleCloudAiplatformV1RemoveContextChildrenRequest": { "description": "Request message for MetadataService.DeleteContextChildrenRequest.", "id": "GoogleCloudAiplatformV1RemoveContextChildrenRequest", @@ -28350,7 +28663,16 @@ "GoogleCloudAiplatformV1ResourceRuntime": { "description": "Persistent Cluster runtime information as output", "id": "GoogleCloudAiplatformV1ResourceRuntime", -"properties": {}, +"properties": { +"accessUris": { +"additionalProperties": { +"type": "string" +}, +"description": "Output only. URIs for user to connect to the Cluster. Example: { \"RAY_HEAD_NODE_INTERNAL_IP\": \"head-node-IP:10001\" \"RAY_DASHBOARD_URI\": \"ray-dashboard-address:8888\" }", +"readOnly": true, +"type": "object" +} +}, "type": "object" }, "GoogleCloudAiplatformV1ResourceRuntimeSpec": { @@ -30879,6 +31201,10 @@ false "$ref": "GoogleCloudAiplatformV1SchemaPredictParamsGroundingConfig", "description": "Grounding checking configuration." }, +"hasPromptVariable": { +"description": "Whether the prompt dataset has prompt variable.", +"type": "boolean" +}, "maxOutputTokens": { "description": "Value of the maximum number of tokens generated set when the dataset was saved.", "format": "int64", @@ -30899,6 +31225,10 @@ false }, "type": "array" }, +"systemInstruction": { +"description": "The content of the prompt dataset system instruction.", +"type": "string" +}, "systemInstructionGcsUri": { "description": "The Google Cloud Storage URI that stores the system instruction, starting with gs://.", "type": "string" @@ -33155,6 +33485,35 @@ false }, "type": "object" }, +"GoogleCloudAiplatformV1StructFieldValue": { +"description": "One field of a Struct (or object) type feature value.", +"id": "GoogleCloudAiplatformV1StructFieldValue", +"properties": { +"name": { +"description": "Name of the field in the struct feature.", +"type": "string" +}, +"value": { +"$ref": "GoogleCloudAiplatformV1FeatureValue", +"description": "The value for this field." +} +}, +"type": "object" +}, +"GoogleCloudAiplatformV1StructValue": { +"description": "Struct (or object) type feature value.", +"id": "GoogleCloudAiplatformV1StructValue", +"properties": { +"values": { +"description": "A list of field values.", +"items": { +"$ref": "GoogleCloudAiplatformV1StructFieldValue" +}, +"type": "array" +} +}, +"type": "object" +}, "GoogleCloudAiplatformV1Study": { "description": "A message representing a Study.", "id": "GoogleCloudAiplatformV1Study", @@ -34499,6 +34858,10 @@ false }, "type": "array" }, +"googleSearchRetrieval": { +"$ref": "GoogleCloudAiplatformV1GoogleSearchRetrieval", +"description": "Optional. GoogleSearchRetrieval tool type. Specialized retrieval tool that is powered by Google search." +}, "retrieval": { "$ref": "GoogleCloudAiplatformV1Retrieval", "description": "Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation." @@ -34506,6 +34869,17 @@ false }, "type": "object" }, +"GoogleCloudAiplatformV1ToolConfig": { +"description": "Tool config. This config is shared for all tools provided in the request.", +"id": "GoogleCloudAiplatformV1ToolConfig", +"properties": { +"functionCallingConfig": { +"$ref": "GoogleCloudAiplatformV1FunctionCallingConfig", +"description": "Optional. Function calling config." 
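For illustration, the new v1 `Tool.googleSearchRetrieval` and `ToolConfig` schemas above surface in generateContent requests. A minimal sketch of wiring them up through the generated Python client follows; the project, location, model id, and the `mode` value on functionCallingConfig are placeholders or assumptions not shown in this diff:

```python
# Sketch only. Assumes the aiplatform v1 discovery artifact from this change;
# PROJECT, LOCATION, the model id, and the functionCallingConfig "mode" value
# are placeholders/assumptions, not taken from this diff.
from googleapiclient import discovery

service = discovery.build("aiplatform", "v1")
model = "projects/PROJECT/locations/LOCATION/publishers/google/models/gemini-1.0-pro"
body = {
    "contents": [{"role": "user", "parts": [{"text": "What is in the news today?"}]}],
    # New in this revision: a retrieval tool powered by Google Search.
    "tools": [{"googleSearchRetrieval": {}}],
    # New ToolConfig message; per the schema it is shared across all tools in the request.
    "toolConfig": {"functionCallingConfig": {"mode": "AUTO"}},
}
response = (
    service.projects().locations().publishers().models()
    .generateContent(model=model, body=body)
    .execute()
)
print(response["candidates"][0]["content"])
```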
+} +}, +"type": "object" +}, "GoogleCloudAiplatformV1TrainingConfig": { "description": "CMLE training config. For every active learning labeling iteration, system will train a machine learning model on CMLE. The trained model will be used by data sampling algorithm to select DataItems.", "id": "GoogleCloudAiplatformV1TrainingConfig", @@ -34780,6 +35154,14 @@ false "description": "The tuning data statistic values for TuningJob.", "id": "GoogleCloudAiplatformV1TuningDataStats", "properties": { +"distillationDataStats": { +"$ref": "GoogleCloudAiplatformV1DistillationDataStats", +"description": "Statistics for distillation." +}, +"reinforcementLearningDataStats": { +"$ref": "GoogleCloudAiplatformV1ReinforcementLearningDataStats", +"description": "Statistics for reinforcement learning." +}, "supervisedTuningDataStats": { "$ref": "GoogleCloudAiplatformV1SupervisedTuningDataStats", "description": "The SFT Tuning data stats." diff --git a/googleapiclient/discovery_cache/documents/aiplatform.v1beta1.json b/googleapiclient/discovery_cache/documents/aiplatform.v1beta1.json index 15cb0fc6e3c..25559b9d355 100644 --- a/googleapiclient/discovery_cache/documents/aiplatform.v1beta1.json +++ b/googleapiclient/discovery_cache/documents/aiplatform.v1beta1.json @@ -999,6 +999,158 @@ } } }, +"cachedContents": { +"methods": { +"create": { +"description": "Creates cached content, this call will initialize the cached content in the data storage, and users need to pay for the cache data storage.", +"flatPath": "v1beta1/projects/{projectsId}/locations/{locationsId}/cachedContents", +"httpMethod": "POST", +"id": "aiplatform.projects.locations.cachedContents.create", +"parameterOrder": [ +"parent" +], +"parameters": { +"parent": { +"description": "Required. The parent resource where the cached content will be created", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1beta1/{+parent}/cachedContents", +"request": { +"$ref": "GoogleCloudAiplatformV1beta1CachedContent" +}, +"response": { +"$ref": "GoogleCloudAiplatformV1beta1CachedContent" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +}, +"delete": { +"description": "Deletes cached content", +"flatPath": "v1beta1/projects/{projectsId}/locations/{locationsId}/cachedContents/{cachedContentsId}", +"httpMethod": "DELETE", +"id": "aiplatform.projects.locations.cachedContents.delete", +"parameterOrder": [ +"name" +], +"parameters": { +"name": { +"description": "Required. The resource name referring to the cached content", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/cachedContents/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1beta1/{+name}", +"response": { +"$ref": "GoogleLongrunningOperation" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +}, +"get": { +"description": "Gets cached content configurations", +"flatPath": "v1beta1/projects/{projectsId}/locations/{locationsId}/cachedContents/{cachedContentsId}", +"httpMethod": "GET", +"id": "aiplatform.projects.locations.cachedContents.get", +"parameterOrder": [ +"name" +], +"parameters": { +"name": { +"description": "Required. 
The resource name referring to the cached content", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/cachedContents/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1beta1/{+name}", +"response": { +"$ref": "GoogleCloudAiplatformV1beta1CachedContent" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +}, +"list": { +"description": "Lists cached contents in a project", +"flatPath": "v1beta1/projects/{projectsId}/locations/{locationsId}/cachedContents", +"httpMethod": "GET", +"id": "aiplatform.projects.locations.cachedContents.list", +"parameterOrder": [ +"parent" +], +"parameters": { +"pageSize": { +"description": "Optional. The maximum number of cached contents to return. The service may return fewer than this value. If unspecified, some default (under maximum) number of items will be returned. The maximum value is 1000; values above 1000 will be coerced to 1000.", +"format": "int32", +"location": "query", +"type": "integer" +}, +"pageToken": { +"description": "Optional. A page token, received from a previous `ListCachedContents` call. Provide this to retrieve the subsequent page. When paginating, all other parameters provided to `ListCachedContents` must match the call that provided the page token.", +"location": "query", +"type": "string" +}, +"parent": { +"description": "Required. The parent, which owns this collection of cached contents.", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1beta1/{+parent}/cachedContents", +"response": { +"$ref": "GoogleCloudAiplatformV1beta1ListCachedContentsResponse" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +}, +"patch": { +"description": "Updates cached content configurations", +"flatPath": "v1beta1/projects/{projectsId}/locations/{locationsId}/cachedContents/{cachedContentsId}", +"httpMethod": "PATCH", +"id": "aiplatform.projects.locations.cachedContents.patch", +"parameterOrder": [ +"name" +], +"parameters": { +"name": { +"description": "Immutable. Identifier. The resource name of the cached content Format: projects/{project}/locations/{location}/cachedContents/{cached_content}", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/cachedContents/[^/]+$", +"required": true, +"type": "string" +}, +"updateMask": { +"description": "Required. 
The list of fields to update.", +"format": "google-fieldmask", +"location": "query", +"type": "string" +} +}, +"path": "v1beta1/{+name}", +"request": { +"$ref": "GoogleCloudAiplatformV1beta1CachedContent" +}, +"response": { +"$ref": "GoogleCloudAiplatformV1beta1CachedContent" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +} +} +}, "customJobs": { "methods": { "cancel": { @@ -3660,7 +3812,8 @@ "$ref": "GoogleCloudAiplatformV1beta1CountTokensResponse" }, "scopes": [ -"https://www.googleapis.com/auth/cloud-platform" +"https://www.googleapis.com/auth/cloud-platform", +"https://www.googleapis.com/auth/cloud-platform.read-only" ] }, "create": { @@ -3774,7 +3927,8 @@ "$ref": "GoogleCloudAiplatformV1beta1DirectPredictResponse" }, "scopes": [ -"https://www.googleapis.com/auth/cloud-platform" +"https://www.googleapis.com/auth/cloud-platform", +"https://www.googleapis.com/auth/cloud-platform.read-only" ] }, "directRawPredict": { @@ -3802,7 +3956,8 @@ "$ref": "GoogleCloudAiplatformV1beta1DirectRawPredictResponse" }, "scopes": [ -"https://www.googleapis.com/auth/cloud-platform" +"https://www.googleapis.com/auth/cloud-platform", +"https://www.googleapis.com/auth/cloud-platform.read-only" ] }, "explain": { @@ -3830,7 +3985,8 @@ "$ref": "GoogleCloudAiplatformV1beta1ExplainResponse" }, "scopes": [ -"https://www.googleapis.com/auth/cloud-platform" +"https://www.googleapis.com/auth/cloud-platform", +"https://www.googleapis.com/auth/cloud-platform.read-only" ] }, "generateContent": { @@ -3858,7 +4014,8 @@ "$ref": "GoogleCloudAiplatformV1beta1GenerateContentResponse" }, "scopes": [ -"https://www.googleapis.com/auth/cloud-platform" +"https://www.googleapis.com/auth/cloud-platform", +"https://www.googleapis.com/auth/cloud-platform.read-only" ] }, "get": { @@ -4051,7 +4208,8 @@ "$ref": "GoogleCloudAiplatformV1beta1PredictResponse" }, "scopes": [ -"https://www.googleapis.com/auth/cloud-platform" +"https://www.googleapis.com/auth/cloud-platform", +"https://www.googleapis.com/auth/cloud-platform.read-only" ] }, "rawPredict": { @@ -4079,7 +4237,8 @@ "$ref": "GoogleApiHttpBody" }, "scopes": [ -"https://www.googleapis.com/auth/cloud-platform" +"https://www.googleapis.com/auth/cloud-platform", +"https://www.googleapis.com/auth/cloud-platform.read-only" ] }, "serverStreamingPredict": { @@ -4107,7 +4266,8 @@ "$ref": "GoogleCloudAiplatformV1beta1StreamingPredictResponse" }, "scopes": [ -"https://www.googleapis.com/auth/cloud-platform" +"https://www.googleapis.com/auth/cloud-platform", +"https://www.googleapis.com/auth/cloud-platform.read-only" ] }, "setIamPolicy": { @@ -4163,7 +4323,8 @@ "$ref": "GoogleCloudAiplatformV1beta1GenerateContentResponse" }, "scopes": [ -"https://www.googleapis.com/auth/cloud-platform" +"https://www.googleapis.com/auth/cloud-platform", +"https://www.googleapis.com/auth/cloud-platform.read-only" ] }, "testIamPermissions": { @@ -4227,6 +4388,39 @@ } }, "resources": { +"chat": { +"methods": { +"completions": { +"description": "Exposes an OpenAI-compatible endpoint for chat completions.", +"flatPath": "v1beta1/projects/{projectsId}/locations/{locationsId}/endpoints/{endpointsId}/chat/completions", +"httpMethod": "POST", +"id": "aiplatform.projects.locations.endpoints.chat.completions", +"parameterOrder": [ +"endpoint" +], +"parameters": { +"endpoint": { +"description": "Required. The name of the Endpoint requested to serve the prediction. 
Format: `projects/{project}/locations/{location}/endpoints/openapi`", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/endpoints/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1beta1/{+endpoint}/chat/completions", +"request": { +"$ref": "GoogleApiHttpBody" +}, +"response": { +"$ref": "GoogleApiHttpBody" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform", +"https://www.googleapis.com/auth/cloud-platform.read-only" +] +} +} +}, "operations": { "methods": { "cancel": { @@ -4991,7 +5185,7 @@ "type": "string" }, "updateMask": { -"description": "Required. Mask specifying which fields to update. Supported fields: * `display_name` * `description` * `tool_use_examples`", +"description": "Required. Mask specifying which fields to update. Supported fields: * `display_name` * `description` * `runtime_config` * `tool_use_examples` * `manifest.description`", "format": "google-fieldmask", "location": "query", "type": "string" @@ -13310,6 +13504,39 @@ }, "notebookExecutionJobs": { "methods": { +"create": { +"description": "Creates a NotebookExecutionJob.", +"flatPath": "v1beta1/projects/{projectsId}/locations/{locationsId}/notebookExecutionJobs", +"httpMethod": "POST", +"id": "aiplatform.projects.locations.notebookExecutionJobs.create", +"parameterOrder": [ +"parent" +], +"parameters": { +"notebookExecutionJobId": { +"description": "Optional. User specified ID for the NotebookExecutionJob.", +"location": "query", +"type": "string" +}, +"parent": { +"description": "Required. The resource name of the Location to create the NotebookExecutionJob. Format: `projects/{project}/locations/{location}`", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1beta1/{+parent}/notebookExecutionJobs", +"request": { +"$ref": "GoogleCloudAiplatformV1beta1NotebookExecutionJob" +}, +"response": { +"$ref": "GoogleLongrunningOperation" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +}, "delete": { "description": "Deletes a NotebookExecutionJob.", "flatPath": "v1beta1/projects/{projectsId}/locations/{locationsId}/notebookExecutionJobs/{notebookExecutionJobsId}", @@ -13662,6 +13889,40 @@ "https://www.googleapis.com/auth/cloud-platform" ] }, +"patch": { +"description": "Updates a NotebookRuntimeTemplate.", +"flatPath": "v1beta1/projects/{projectsId}/locations/{locationsId}/notebookRuntimeTemplates/{notebookRuntimeTemplatesId}", +"httpMethod": "PATCH", +"id": "aiplatform.projects.locations.notebookRuntimeTemplates.patch", +"parameterOrder": [ +"name" +], +"parameters": { +"name": { +"description": "The resource name of the NotebookRuntimeTemplate.", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/notebookRuntimeTemplates/[^/]+$", +"required": true, +"type": "string" +}, +"updateMask": { +"description": "Required. The update mask applies to the resource. For the `FieldMask` definition, see google.protobuf.FieldMask. Input format: `{paths: \"${updated_filed}\"}` Updatable fields: * `encryption_spec.kms_key_name`", +"format": "google-fieldmask", +"location": "query", +"type": "string" +} +}, +"path": "v1beta1/{+name}", +"request": { +"$ref": "GoogleCloudAiplatformV1beta1NotebookRuntimeTemplate" +}, +"response": { +"$ref": "GoogleCloudAiplatformV1beta1NotebookRuntimeTemplate" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +}, "setIamPolicy": { "description": "Sets the access control policy on the specified resource. Replaces any existing policy. 
Can return `NOT_FOUND`, `INVALID_ARGUMENT`, and `PERMISSION_DENIED` errors.", "flatPath": "v1beta1/projects/{projectsId}/locations/{locationsId}/notebookRuntimeTemplates/{notebookRuntimeTemplatesId}:setIamPolicy", @@ -14891,7 +15152,8 @@ "$ref": "GoogleCloudAiplatformV1beta1CountTokensResponse" }, "scopes": [ -"https://www.googleapis.com/auth/cloud-platform" +"https://www.googleapis.com/auth/cloud-platform", +"https://www.googleapis.com/auth/cloud-platform.read-only" ] }, "generateContent": { @@ -14919,7 +15181,8 @@ "$ref": "GoogleCloudAiplatformV1beta1GenerateContentResponse" }, "scopes": [ -"https://www.googleapis.com/auth/cloud-platform" +"https://www.googleapis.com/auth/cloud-platform", +"https://www.googleapis.com/auth/cloud-platform.read-only" ] }, "getIamPolicy": { @@ -14978,7 +15241,8 @@ "$ref": "GoogleCloudAiplatformV1beta1PredictResponse" }, "scopes": [ -"https://www.googleapis.com/auth/cloud-platform" +"https://www.googleapis.com/auth/cloud-platform", +"https://www.googleapis.com/auth/cloud-platform.read-only" ] }, "rawPredict": { @@ -15006,7 +15270,8 @@ "$ref": "GoogleApiHttpBody" }, "scopes": [ -"https://www.googleapis.com/auth/cloud-platform" +"https://www.googleapis.com/auth/cloud-platform", +"https://www.googleapis.com/auth/cloud-platform.read-only" ] }, "serverStreamingPredict": { @@ -15034,7 +15299,8 @@ "$ref": "GoogleCloudAiplatformV1beta1StreamingPredictResponse" }, "scopes": [ -"https://www.googleapis.com/auth/cloud-platform" +"https://www.googleapis.com/auth/cloud-platform", +"https://www.googleapis.com/auth/cloud-platform.read-only" ] }, "streamGenerateContent": { @@ -15062,7 +15328,8 @@ "$ref": "GoogleCloudAiplatformV1beta1GenerateContentResponse" }, "scopes": [ -"https://www.googleapis.com/auth/cloud-platform" +"https://www.googleapis.com/auth/cloud-platform", +"https://www.googleapis.com/auth/cloud-platform.read-only" ] } } @@ -15737,6 +16004,40 @@ "https://www.googleapis.com/auth/cloud-platform" ] }, +"patch": { +"description": "Updates a reasoning engine.", +"flatPath": "v1beta1/projects/{projectsId}/locations/{locationsId}/reasoningEngines/{reasoningEnginesId}", +"httpMethod": "PATCH", +"id": "aiplatform.projects.locations.reasoningEngines.patch", +"parameterOrder": [ +"name" +], +"parameters": { +"name": { +"description": "Identifier. The resource name of the ReasoningEngine.", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/reasoningEngines/[^/]+$", +"required": true, +"type": "string" +}, +"updateMask": { +"description": "Required. 
Mask specifying which fields to update.", +"format": "google-fieldmask", +"location": "query", +"type": "string" +} +}, +"path": "v1beta1/{+name}", +"request": { +"$ref": "GoogleCloudAiplatformV1beta1ReasoningEngine" +}, +"response": { +"$ref": "GoogleLongrunningOperation" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +}, "query": { "description": "Queries using a reasoning engine.", "flatPath": "v1beta1/projects/{projectsId}/locations/{locationsId}/reasoningEngines/{reasoningEnginesId}:query", @@ -19595,121 +19896,9 @@ } } }, -"revision": "20240510", +"revision": "20240529", "rootUrl": "https://aiplatform.googleapis.com/", "schemas": { -"CloudAiLargeModelsVisionFilteredText": { -"description": "Details for filtered input text.", -"id": "CloudAiLargeModelsVisionFilteredText", -"properties": { -"category": { -"description": "Confidence level", -"enum": [ -"RAI_CATEGORY_UNSPECIFIED", -"OBSCENE", -"SEXUALLY_EXPLICIT", -"IDENTITY_ATTACK", -"VIOLENCE_ABUSE", -"CSAI", -"SPII", -"CELEBRITY", -"FACE_IMG", -"WATERMARK_IMG", -"MEMORIZATION_IMG", -"CSAI_IMG", -"PORN_IMG", -"VIOLENCE_IMG", -"CHILD_IMG", -"TOXIC", -"SENSITIVE_WORD", -"PERSON_IMG", -"ICA_IMG", -"SEXUAL_IMG", -"IU_IMG", -"RACY_IMG", -"PEDO_IMG", -"DEATH_HARM_TRAGEDY", -"HEALTH", -"FIREARMS_WEAPONS", -"RELIGIOUS_BELIEF", -"ILLICIT_DRUGS", -"WAR_CONFLICT", -"POLITICS", -"HATE_SYMBOL_IMG", -"CHILD_TEXT", -"DANGEROUS_CONTENT", -"RECITATION_TEXT", -"CELEBRITY_IMG", -"WATERMARK_IMG_REMOVAL" -], -"enumDescriptions": [ -"", -"", -"Porn", -"Hate", -"", -"", -"", -"", -"", -"", -"", -"", -"", -"", -"", -"", -"", -"", -"", -"", -"", -"", -"", -"SafetyAttributes returned but not filtered on", -"", -"", -"", -"", -"", -"", -"End of list", -"", -"Text category from SafetyCat v3", -"", -"", -"Error message when user attempts to remove watermark from editing image" -], -"type": "string" -}, -"confidence": { -"description": "Filtered category", -"enum": [ -"CONFIDENCE_UNSPECIFIED", -"CONFIDENCE_LOW", -"CONFIDENCE_MEDIUM", -"CONFIDENCE_HIGH" -], -"enumDescriptions": [ -"", -"", -"", -"" -], -"type": "string" -}, -"prompt": { -"description": "Input prompt", -"type": "string" -}, -"score": { -"description": "Score for category", -"format": "double", -"type": "number" -} -}, -"type": "object" -}, "CloudAiLargeModelsVisionGenerateVideoResponse": { "description": "Generate video response.", "id": "CloudAiLargeModelsVisionGenerateVideoResponse", @@ -19721,10 +19910,6 @@ }, "type": "array" }, -"raiErrorMessage": { -"description": "Returns rai error message for filtered videos.", -"type": "string" -}, "raiMediaFilteredCount": { "description": "Returns if any videos were filtered due to RAI policies.", "format": "int32", @@ -19736,10 +19921,6 @@ "type": "string" }, "type": "array" -}, -"raiTextFilteredReason": { -"$ref": "CloudAiLargeModelsVisionFilteredText", -"description": "Returns filtered text rai info." 
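The v1beta1 endpoints resource above also gains an OpenAI-compatible `chat.completions` method, and most prediction methods now additionally accept the `cloud-platform.read-only` scope. A rough usage sketch; since the request and response are raw `GoogleApiHttpBody` payloads, the assumption that the client forwards the OpenAI-style dict verbatim may not hold exactly:

```python
# Sketch only. The OpenAI-style dict passed as `body` assumes the client
# serializes it directly as the GoogleApiHttpBody payload.
import google.auth
from googleapiclient import discovery

# The prediction methods in this change also list the narrower read-only scope.
creds, _ = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform.read-only"]
)
service = discovery.build("aiplatform", "v1beta1", credentials=creds)

# Endpoint format per the new method's description.
endpoint = "projects/PROJECT/locations/LOCATION/endpoints/openapi"
resp = (
    service.projects().locations().endpoints().chat()
    .completions(
        endpoint=endpoint,
        body={
            "model": "google/gemini-1.5-flash-001",  # placeholder model id
            "messages": [{"role": "user", "content": "Say hello."}],
        },
    )
    .execute()
)
```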
} }, "type": "object" @@ -19851,6 +20032,13 @@ "CloudAiLargeModelsVisionRaiInfo": { "id": "CloudAiLargeModelsVisionRaiInfo", "properties": { +"detectedLabels": { +"description": "The list of detected labels for different rai categories.", +"items": { +"$ref": "CloudAiLargeModelsVisionRaiInfoDetectedLabels" +}, +"type": "array" +}, "raiCategories": { "description": "List of rai categories' information to return", "items": { @@ -19869,6 +20057,80 @@ }, "type": "object" }, +"CloudAiLargeModelsVisionRaiInfoDetectedLabels": { +"description": "Filters returning list of deteceted labels, scores, and bounding boxes.", +"id": "CloudAiLargeModelsVisionRaiInfoDetectedLabels", +"properties": { +"entities": { +"description": "The list of detected entities for the rai signal.", +"items": { +"$ref": "CloudAiLargeModelsVisionRaiInfoDetectedLabelsEntity" +}, +"type": "array" +}, +"raiCategory": { +"description": "The RAI category for the deteceted labels.", +"type": "string" +} +}, +"type": "object" +}, +"CloudAiLargeModelsVisionRaiInfoDetectedLabelsBoundingBox": { +"description": "An integer bounding box of original pixels of the image for the detected labels.", +"id": "CloudAiLargeModelsVisionRaiInfoDetectedLabelsBoundingBox", +"properties": { +"x1": { +"description": "The X coordinate of the top-left corner, in pixels.", +"format": "int32", +"type": "integer" +}, +"x2": { +"description": "The X coordinate of the bottom-right corner, in pixels.", +"format": "int32", +"type": "integer" +}, +"y1": { +"description": "The Y coordinate of the top-left corner, in pixels.", +"format": "int32", +"type": "integer" +}, +"y2": { +"description": "The Y coordinate of the bottom-right corner, in pixels.", +"format": "int32", +"type": "integer" +} +}, +"type": "object" +}, +"CloudAiLargeModelsVisionRaiInfoDetectedLabelsEntity": { +"description": "The properties for a detected entity from the rai signal.", +"id": "CloudAiLargeModelsVisionRaiInfoDetectedLabelsEntity", +"properties": { +"boundingBox": { +"$ref": "CloudAiLargeModelsVisionRaiInfoDetectedLabelsBoundingBox", +"description": "Bounding box of the label" +}, +"description": { +"description": "Description of the label", +"type": "string" +}, +"iouScore": { +"description": "The intersection ratio between the detection bounding box and the mask.", +"format": "float", +"type": "number" +}, +"mid": { +"description": "MID of the label", +"type": "string" +}, +"score": { +"description": "Confidence score of the label", +"format": "float", +"type": "number" +} +}, +"type": "object" +}, "CloudAiLargeModelsVisionSemanticFilterResponse": { "id": "CloudAiLargeModelsVisionSemanticFilterResponse", "properties": { @@ -19902,6 +20164,75 @@ }, "type": "object" }, +"CloudAiPlatformCommonCreatePipelineJobApiErrorDetail": { +"description": "Create API error message for Vertex Pipeline. 
Next Id: 3.", +"id": "CloudAiPlatformCommonCreatePipelineJobApiErrorDetail", +"properties": { +"errorCause": { +"description": "The error root cause returned by CreatePipelineJob API.", +"enum": [ +"ERROR_CAUSE_UNSPECIFIED", +"INVALID_PIPELINE_SPEC_FORMAT", +"INVALID_PIPELINE_SPEC", +"INVALID_DEPLOYMENT_CONFIG", +"INVALID_DEPLOYMENT_SPEC", +"INVALID_INSTANCE_SCHEMA", +"INVALID_CUSTOM_JOB", +"INVALID_CONTAINER_SPEC", +"INVALID_NOTIFICATION_EMAIL_SETUP", +"INVALID_SERVICE_ACCOUNT_SETUP", +"INVALID_KMS_SETUP", +"INVALID_NETWORK_SETUP", +"INVALID_PIPELINE_TASK_SPEC", +"INVALID_PIPELINE_TASK_ARTIFACT", +"INVALID_IMPORTER_SPEC", +"INVALID_RESOLVER_SPEC", +"INVALID_RUNTIME_PARAMETERS", +"CLOUD_API_NOT_ENABLED", +"INVALID_GCS_INPUT_URI", +"INVALID_GCS_OUTPUT_URI", +"INVALID_COMPONENT_SPEC", +"INVALID_DAG_OUTPUTS_SPEC", +"INVALID_DAG_SPEC", +"INSUFFICIENT_QUOTA", +"INTERNAL" +], +"enumDescriptions": [ +"Should never be used.", +"IR Pipeline Spec can not been parsed to yaml or json format.", +"A pipeline spec is invalid.", +"A deployment config is invalid.", +"A deployment spec is invalid.", +"An instance schema is invalid.", +"A custom job is invalid.", +"A container spec is invalid.", +"Notification email setup is invalid.", +"Service account setup is invalid.", +"KMS setup is invalid.", +"Network setup is invalid.", +"Task spec is invalid.", +"Task artifact is invalid.", +"Importer spec is invalid.", +"Resolver spec is invalid.", +"Runtime Parameters are invalid.", +"Cloud API not enabled.", +"Invalid GCS input uri", +"Invalid GCS output uri", +"Component spec of pipeline is invalid.", +"DagOutputsSpec is invalid.", +"DagSpec is invalid.", +"Project does not have enough quota.", +"An internal error with unknown cause." +], +"type": "string" +}, +"publicMessage": { +"description": "Public messages contains actionable items for the error cause.", +"type": "string" +} +}, +"type": "object" +}, "GoogleApiHttpBody": { "description": "Message that represents an arbitrary HTTP body. It should only be used for payload formats that can't be represented as JSON, such as raw binary or an HTML page. This message can be used both in streaming and non-streaming API methods in the request as well as the response. It can be used as a top-level request field, which is convenient if one wants to extract parameters from either the URL or HTTP template into the request fields and also want access to the raw HTTP body. Example: message GetResourceRequest { // A unique request id. string request_id = 1; // The raw HTTP body is bound to this field. google.api.HttpBody http_body = 2; } service ResourceService { rpc GetResource(GetResourceRequest) returns (google.api.HttpBody); rpc UpdateResource(google.api.HttpBody) returns (google.protobuf.Empty); } Example with streaming methods: service CaldavService { rpc GetCalendar(stream google.api.HttpBody) returns (stream google.api.HttpBody); rpc UpdateCalendar(stream google.api.HttpBody) returns (stream google.api.HttpBody); } Use of this type only changes how the request and response bodies are handled, all other features will continue to work unchanged.", "id": "GoogleApiHttpBody", @@ -21267,6 +21598,65 @@ }, "type": "object" }, +"GoogleCloudAiplatformV1beta1CachedContent": { +"description": "A resource used in LLM queries for users to explicitly specify what to cache and how to cache.", +"id": "GoogleCloudAiplatformV1beta1CachedContent", +"properties": { +"contents": { +"description": "Optional. Input only. Immutable. 
The content to cache", +"items": { +"$ref": "GoogleCloudAiplatformV1beta1Content" +}, +"type": "array" +}, +"createTime": { +"description": "Output only. Creatation time of the cache entry.", +"format": "google-datetime", +"readOnly": true, +"type": "string" +}, +"expireTime": { +"description": "Timestamp of when this resource is considered expired. This is *always* provided on output, regardless of what was sent on input.", +"format": "google-datetime", +"type": "string" +}, +"model": { +"description": "Immutable. The name of the publisher model to use for cached content. Format: projects/{project}/locations/{location}/publishers/{publisher}/models/{model}", +"type": "string" +}, +"name": { +"description": "Immutable. Identifier. The resource name of the cached content Format: projects/{project}/locations/{location}/cachedContents/{cached_content}", +"type": "string" +}, +"systemInstruction": { +"$ref": "GoogleCloudAiplatformV1beta1Content", +"description": "Optional. Input only. Immutable. Developer set system instruction. Currently, text only" +}, +"toolConfig": { +"$ref": "GoogleCloudAiplatformV1beta1ToolConfig", +"description": "Optional. Input only. Immutable. Tool config. This config is shared for all tools" +}, +"tools": { +"description": "Optional. Input only. Immutable. A list of `Tools` the model may use to generate the next response", +"items": { +"$ref": "GoogleCloudAiplatformV1beta1Tool" +}, +"type": "array" +}, +"ttl": { +"description": "Input only. The TTL for this resource. The expiration time is computed: now + TTL.", +"format": "google-duration", +"type": "string" +}, +"updateTime": { +"description": "Output only. When the cache entry was last updated in UTC time.", +"format": "google-datetime", +"readOnly": true, +"type": "string" +} +}, +"type": "object" +}, "GoogleCloudAiplatformV1beta1CancelBatchPredictionJobRequest": { "description": "Request message for JobService.CancelBatchPredictionJob.", "id": "GoogleCloudAiplatformV1beta1CancelBatchPredictionJobRequest", @@ -22054,6 +22444,21 @@ }, "type": "object" }, +"GoogleCloudAiplatformV1beta1CreateNotebookExecutionJobOperationMetadata": { +"description": "Metadata information for NotebookService.CreateNotebookExecutionJob.", +"id": "GoogleCloudAiplatformV1beta1CreateNotebookExecutionJobOperationMetadata", +"properties": { +"genericMetadata": { +"$ref": "GoogleCloudAiplatformV1beta1GenericOperationMetadata", +"description": "The operation generic information." +}, +"progressMessage": { +"description": "A human-readable message that shows the intermediate progress details of NotebookRuntime.", +"type": "string" +} +}, +"type": "object" +}, "GoogleCloudAiplatformV1beta1CreateNotebookExecutionJobRequest": { "description": "Request message for [NotebookService.CreateNotebookExecutionJob]", "id": "GoogleCloudAiplatformV1beta1CreateNotebookExecutionJobRequest", @@ -22663,6 +23068,142 @@ }, "type": "object" }, +"GoogleCloudAiplatformV1beta1DatasetDistribution": { +"description": "Distribution computed over a tuning dataset.", +"id": "GoogleCloudAiplatformV1beta1DatasetDistribution", +"properties": { +"buckets": { +"description": "Output only. Defines the histogram bucket.", +"items": { +"$ref": "GoogleCloudAiplatformV1beta1DatasetDistributionDistributionBucket" +}, +"readOnly": true, +"type": "array" +}, +"max": { +"description": "Output only. The maximum of the population values.", +"format": "double", +"readOnly": true, +"type": "number" +}, +"mean": { +"description": "Output only. 
The arithmetic mean of the values in the population.", +"format": "double", +"readOnly": true, +"type": "number" +}, +"median": { +"description": "Output only. The median of the values in the population.", +"format": "double", +"readOnly": true, +"type": "number" +}, +"min": { +"description": "Output only. The minimum of the population values.", +"format": "double", +"readOnly": true, +"type": "number" +}, +"p5": { +"description": "Output only. The 5th percentile of the values in the population.", +"format": "double", +"readOnly": true, +"type": "number" +}, +"p95": { +"description": "Output only. The 95th percentile of the values in the population.", +"format": "double", +"readOnly": true, +"type": "number" +}, +"sum": { +"description": "Output only. Sum of a given population of values.", +"format": "double", +"readOnly": true, +"type": "number" +} +}, +"type": "object" +}, +"GoogleCloudAiplatformV1beta1DatasetDistributionDistributionBucket": { +"description": "Dataset bucket used to create a histogram for the distribution given a population of values.", +"id": "GoogleCloudAiplatformV1beta1DatasetDistributionDistributionBucket", +"properties": { +"count": { +"description": "Output only. Number of values in the bucket.", +"format": "int64", +"readOnly": true, +"type": "string" +}, +"left": { +"description": "Output only. Left bound of the bucket.", +"format": "double", +"readOnly": true, +"type": "number" +}, +"right": { +"description": "Output only. Right bound of the bucket.", +"format": "double", +"readOnly": true, +"type": "number" +} +}, +"type": "object" +}, +"GoogleCloudAiplatformV1beta1DatasetStats": { +"description": "Statistics computed over a tuning dataset.", +"id": "GoogleCloudAiplatformV1beta1DatasetStats", +"properties": { +"totalBillableCharacterCount": { +"description": "Output only. Number of billable characters in the tuning dataset.", +"format": "int64", +"readOnly": true, +"type": "string" +}, +"totalTuningCharacterCount": { +"description": "Output only. Number of tuning characters in the tuning dataset.", +"format": "int64", +"readOnly": true, +"type": "string" +}, +"tuningDatasetExampleCount": { +"description": "Output only. Number of examples in the tuning dataset.", +"format": "int64", +"readOnly": true, +"type": "string" +}, +"tuningStepCount": { +"description": "Output only. Number of tuning steps for this Tuning Job.", +"format": "int64", +"readOnly": true, +"type": "string" +}, +"userDatasetExamples": { +"description": "Output only. Sample user messages in the training dataset uri.", +"items": { +"$ref": "GoogleCloudAiplatformV1beta1Content" +}, +"readOnly": true, +"type": "array" +}, +"userInputTokenDistribution": { +"$ref": "GoogleCloudAiplatformV1beta1DatasetDistribution", +"description": "Output only. Dataset distributions for the user input tokens.", +"readOnly": true +}, +"userMessagePerExampleDistribution": { +"$ref": "GoogleCloudAiplatformV1beta1DatasetDistribution", +"description": "Output only. Dataset distributions for the messages per example.", +"readOnly": true +}, +"userOutputTokenDistribution": { +"$ref": "GoogleCloudAiplatformV1beta1DatasetDistribution", +"description": "Output only. Dataset distributions for the user output tokens.", +"readOnly": true +} +}, +"type": "object" +}, "GoogleCloudAiplatformV1beta1DatasetVersion": { "description": "Describes the dataset version.", "id": "GoogleCloudAiplatformV1beta1DatasetVersion", @@ -23170,9 +23711,21 @@ "$ref": "GoogleCloudAiplatformV1beta1DedicatedResources", "description": "Required. 
The underlying DedicatedResources that the DeploymentResourcePool uses." }, +"disableContainerLogging": { +"description": "If the DeploymentResourcePool is deployed with custom-trained Models or AutoML Tabular Models, the container(s) of the DeploymentResourcePool will send `stderr` and `stdout` streams to Cloud Logging by default. Please note that the logs incur cost, which are subject to [Cloud Logging pricing](https://cloud.google.com/logging/pricing). User can disable container logging by setting this flag to true.", +"type": "boolean" +}, +"encryptionSpec": { +"$ref": "GoogleCloudAiplatformV1beta1EncryptionSpec", +"description": "Customer-managed encryption key spec for a DeploymentResourcePool. If set, this DeploymentResourcePool will be secured by this key. Endpoints and the DeploymentResourcePool they deploy in need to have the same EncryptionSpec." +}, "name": { "description": "Immutable. The resource name of the DeploymentResourcePool. Format: `projects/{project}/locations/{location}/deploymentResourcePools/{deployment_resource_pool}`", "type": "string" +}, +"serviceAccount": { +"description": "The service account that the DeploymentResourcePool's container(s) run as. Specify the email address of the service account. If this service account is not specified, the container(s) run as a service account that doesn't have access to the resource project. Users deploying the Models to this DeploymentResourcePool must have the `iam.serviceAccounts.actAs` permission on this service account.", +"type": "string" } }, "type": "object" @@ -23277,6 +23830,88 @@ }, "type": "object" }, +"GoogleCloudAiplatformV1beta1DistillationDataStats": { +"description": "Statistics computed for datasets used for distillation.", +"id": "GoogleCloudAiplatformV1beta1DistillationDataStats", +"properties": { +"trainingDatasetStats": { +"$ref": "GoogleCloudAiplatformV1beta1DatasetStats", +"description": "Output only. Statistics computed for the training dataset.", +"readOnly": true +} +}, +"type": "object" +}, +"GoogleCloudAiplatformV1beta1DistillationHyperParameters": { +"description": "Hyperparameters for Distillation.", +"id": "GoogleCloudAiplatformV1beta1DistillationHyperParameters", +"properties": { +"adapterSize": { +"description": "Optional. Adapter size for distillation.", +"enum": [ +"ADAPTER_SIZE_UNSPECIFIED", +"ADAPTER_SIZE_ONE", +"ADAPTER_SIZE_FOUR", +"ADAPTER_SIZE_EIGHT", +"ADAPTER_SIZE_SIXTEEN" +], +"enumDescriptions": [ +"Adapter size is unspecified.", +"Adapter size 1.", +"Adapter size 4.", +"Adapter size 8.", +"Adapter size 16." +], +"type": "string" +}, +"epochCount": { +"description": "Optional. Number of complete passes the model makes over the entire training dataset during training.", +"format": "int64", +"type": "string" +}, +"learningRateMultiplier": { +"description": "Optional. Multiplier for adjusting the default learning rate.", +"format": "double", +"type": "number" +} +}, +"type": "object" +}, +"GoogleCloudAiplatformV1beta1DistillationSpec": { +"description": "Tuning Spec for Distillation.", +"id": "GoogleCloudAiplatformV1beta1DistillationSpec", +"properties": { +"baseTeacherModel": { +"description": "The base teacher model that is being distilled, e.g., \"gemini-1.0-pro-002\".", +"type": "string" +}, +"hyperParameters": { +"$ref": "GoogleCloudAiplatformV1beta1DistillationHyperParameters", +"description": "Optional. Hyperparameters for Distillation." +}, +"pipelineRootDirectory": { +"description": "Required. 
A path in a Cloud Storage bucket, which will be treated as the root output directory of the distillation pipeline. It is used by the system to generate the paths of output artifacts.", +"type": "string" +}, +"studentModel": { +"description": "The student model that is being tuned, e.g., \"google/gemma-2b-it\".", +"type": "string" +}, +"trainingDatasetUri": { +"description": "Required. Cloud Storage path to file containing training dataset for tuning. The dataset must be formatted as a JSONL file.", +"type": "string" +}, +"tunedTeacherModelSource": { +"description": "The resource name of the Tuned teacher model. Format: `projects/{project}/locations/{location}/models/{model}`.", +"type": "string" +}, +"validationDatasetUri": { +"description": "Optional. Cloud Storage path to file containing validation dataset for tuning. The dataset must be formatted as a JSONL file.", +"type": "string" +} +}, +"type": "object" +}, "GoogleCloudAiplatformV1beta1DoubleArray": { "description": "A list of double values.", "id": "GoogleCloudAiplatformV1beta1DoubleArray", @@ -24916,7 +25551,7 @@ "description": "Required. Immutable. Type of auth supported by this extension." }, "description": { -"description": "Required. The natural language description shown to the LLM. It should describe the usage of the extension, and is essential for the LLM to perform reasoning.", +"description": "Required. The natural language description shown to the LLM. It should describe the usage of the extension, and is essential for the LLM to perform reasoning. e.g., if the extension is a data store, you can let the LLM know what data it contains.", "type": "string" }, "name": { @@ -25044,7 +25679,8 @@ "INT64_ARRAY", "STRING", "STRING_ARRAY", -"BYTES" +"BYTES", +"STRUCT" ], "enumDescriptions": [ "The value type is unspecified.", @@ -25056,7 +25692,8 @@ "Used for Feature that is a list of INT64.", "Used for Feature that is string.", "Used for Feature that is a list of String.", -"Used for Feature that is bytes." +"Used for Feature that is bytes.", +"Used for Feature that is struct." ], "type": "string" }, @@ -25207,6 +25844,10 @@ "deprecated": true, "description": "Optional. Deprecated: This field is no longer needed anymore and embedding management is automatically enabled when specifying Optimized storage type." }, +"encryptionSpec": { +"$ref": "GoogleCloudAiplatformV1beta1EncryptionSpec", +"description": "Optional. Customer-managed encryption key spec for data storage. If set, online store will be secured by this key." +}, "etag": { "description": "Optional. Used to perform consistent read-modify-write updates. If not set, a blind \"overwrite\" update happens.", "type": "string" @@ -25417,6 +26058,10 @@ "stringValue": { "description": "String feature value.", "type": "string" +}, +"structValue": { +"$ref": "GoogleCloudAiplatformV1beta1StructValue", +"description": "A struct type feature value." } }, "type": "object" @@ -26582,6 +27227,10 @@ "description": "Request message for [PredictionService.GenerateContent].", "id": "GoogleCloudAiplatformV1beta1GenerateContentRequest", "properties": { +"cachedContent": { +"description": "Optional. The name of the cached content used as context to serve the prediction. Note: only used in explicit caching, where users can have control over caching (e.g. what content to cache) and enjoy guaranteed cost savings. Format: `projects/{project}/locations/{location}/cachedContents/{cachedContent}`", +"type": "string" +}, "contents": { "description": "Required. 
The content of the current conversation with the model. For single-turn queries, this is a single instance. For multi-turn queries, this is a repeated field that contains conversation history + latest request.", "items": { @@ -26730,21 +27379,9 @@ "description": "Optional. Output response mimetype of the generated candidate text. Supported mimetype: - `text/plain`: (default) Text output. - `application/json`: JSON response in the candidates. The model needs to be prompted to output the appropriate response type, otherwise the behavior is undefined. This is a preview feature.", "type": "string" }, -"responseStyle": { -"description": "Optional. Control Three levels of creativity in the model output. Default: RESPONSE_STYLE_BALANCED", -"enum": [ -"RESPONSE_STYLE_UNSPECIFIED", -"RESPONSE_STYLE_PRECISE", -"RESPONSE_STYLE_BALANCED", -"RESPONSE_STYLE_CREATIVE" -], -"enumDescriptions": [ -"response style unspecified.", -"Precise response.", -"Default response style.", -"Creative response style." -], -"type": "string" +"responseSchema": { +"$ref": "GoogleCloudAiplatformV1beta1Schema", +"description": "Optional. The `Schema` object allows the definition of input and output data types. These types can be objects, but also primitives and arrays. Represents a select subset of an [OpenAPI 3.0 schema object](https://spec.openapis.org/oas/v3.0.3#schema). If set, a compatible response_mime_type must also be set. Compatible mimetypes: `application/json`: Schema for JSON response." }, "stopSequences": { "description": "Optional. Stop sequences.", @@ -26848,6 +27485,12 @@ }, "type": "object" }, +"GoogleCloudAiplatformV1beta1GoogleSearchRetrieval": { +"description": "Tool to retrieve public web data for grounding, powered by Google.", +"id": "GoogleCloudAiplatformV1beta1GoogleSearchRetrieval", +"properties": {}, +"type": "object" +}, "GoogleCloudAiplatformV1beta1GroundednessInput": { "description": "Input for groundedness metric.", "id": "GoogleCloudAiplatformV1beta1GroundednessInput", @@ -26939,6 +27582,17 @@ }, "type": "object" }, +"GoogleCloudAiplatformV1beta1HumanFeedbackConfig": { +"description": "Configures Reinforcement Learning to use human feedback during tuning.", +"id": "GoogleCloudAiplatformV1beta1HumanFeedbackConfig", +"properties": { +"preferenceDatasetUri": { +"description": "Required. Cloud Storage path to human preference data.", +"type": "string" +} +}, +"type": "object" +}, "GoogleCloudAiplatformV1beta1HyperparameterTuningJob": { "description": "Represents a HyperparameterTuningJob. A HyperparameterTuningJob has a Study specification and multiple CustomJobs with identical CustomJob specification.", "id": "GoogleCloudAiplatformV1beta1HyperparameterTuningJob", @@ -27785,19 +28439,21 @@ "EUC_METADATA_API_STATE", "EUC_AGENT_API_STATE", "IDLE_SHUTDOWN_AGENT_STATE", -"PROXY_AGENT_STATE" +"PROXY_AGENT_STATE", +"GCR_DNS_STATE" ], "enumDescriptions": [ "Service name unknown.", "Represents the internal os docker client.", -"Represents reoslving DNS for the control plane api endpoint.", -"Represents reoslving DNS for the proxy registration endpoint.", +"Represents resolving DNS for the control plane api endpoint.", +"Represents resolving DNS for the proxy registration endpoint.", "Represents the jupyter endpoint.", "Represents the jupyter/api endpoint.", "Represents the EUC metadata server API endpoint.", "Represents the EUC agent server API endpoint.", "Represents the idle shutdown agent sidecar container.", -"Represents the proxy agent sidecar container." 
+"Represents the proxy agent sidecar container.", +"Represents resolving DNS for the gcr.io endpoint." ], "type": "string" }, @@ -27911,6 +28567,24 @@ }, "type": "object" }, +"GoogleCloudAiplatformV1beta1ListCachedContentsResponse": { +"description": "Response with a list of CachedContents.", +"id": "GoogleCloudAiplatformV1beta1ListCachedContentsResponse", +"properties": { +"cachedContents": { +"description": "List of cached contents.", +"items": { +"$ref": "GoogleCloudAiplatformV1beta1CachedContent" +}, +"type": "array" +}, +"nextPageToken": { +"description": "A token, which can be sent as `page_token` to retrieve the next page. If this field is omitted, there are no subsequent pages.", +"type": "string" +} +}, +"type": "object" +}, "GoogleCloudAiplatformV1beta1ListContextsResponse": { "description": "Response message for MetadataService.ListContexts.", "id": "GoogleCloudAiplatformV1beta1ListContextsResponse", @@ -29042,6 +29716,10 @@ "readOnly": true, "type": "string" }, +"dataplexConfig": { +"$ref": "GoogleCloudAiplatformV1beta1MetadataStoreDataplexConfig", +"description": "Optional. Dataplex integration settings." +}, "description": { "description": "Description of the MetadataStore.", "type": "string" @@ -29069,6 +29747,17 @@ }, "type": "object" }, +"GoogleCloudAiplatformV1beta1MetadataStoreDataplexConfig": { +"description": "Represents Dataplex integration settings.", +"id": "GoogleCloudAiplatformV1beta1MetadataStoreDataplexConfig", +"properties": { +"enabledPipelinesLineage": { +"description": "Optional. Whether or not Data Lineage synchronization is enabled for Vertex Pipelines.", +"type": "boolean" +} +}, +"type": "object" +}, "GoogleCloudAiplatformV1beta1MetadataStoreMetadataStoreState": { "description": "Represents state information for a MetadataStore.", "id": "GoogleCloudAiplatformV1beta1MetadataStoreMetadataStoreState", @@ -30336,6 +31025,28 @@ }, "type": "object" }, +"GoogleCloudAiplatformV1beta1ModelMonitoringGenAiStats": { +"description": "A collection of data points that describes the time-varying values of a gen ai metric.", +"id": "GoogleCloudAiplatformV1beta1ModelMonitoringGenAiStats", +"properties": { +"dataPoints": { +"description": "The data points of this time series. When listing time series, points are returned in reverse time order.", +"items": { +"$ref": "GoogleCloudAiplatformV1beta1ModelMonitoringStatsDataPoint" +}, +"type": "array" +}, +"objectiveType": { +"description": "One of the supported monitoring objectives: `gen-ai-general` `gen-ai-evaluation` `gen-ai-safety`", +"type": "string" +}, +"statsName": { +"description": "The stats name.", +"type": "string" +} +}, +"type": "object" +}, "GoogleCloudAiplatformV1beta1ModelMonitoringInput": { "description": "Model monitoring data input spec.", "id": "GoogleCloudAiplatformV1beta1ModelMonitoringInput", @@ -30980,6 +31691,10 @@ "description": "Represents the collection of statistics for a metric.", "id": "GoogleCloudAiplatformV1beta1ModelMonitoringStats", "properties": { +"genAiStats": { +"$ref": "GoogleCloudAiplatformV1beta1ModelMonitoringGenAiStats", +"description": "Generated gen ai statistics." +}, "tabularStats": { "$ref": "GoogleCloudAiplatformV1beta1ModelMonitoringTabularStats", "description": "Generated tabular statistics." 
@@ -31116,7 +31831,7 @@ "id": "GoogleCloudAiplatformV1beta1ModelMonitoringStatsDataPointTypedValueDistributionDataValue", "properties": { "distribution": { -"description": "tensorflow.metadata.v0.DatasetFeatureStatistics format.", +"description": "Predictive monitoring drift distribution in `tensorflow.metadata.v0.DatasetFeatureStatistics` format.", "type": "any" }, "distributionDeviation": { @@ -31775,7 +32490,8 @@ "INVALID_ENCODING", "INVALID_SPARSE_DIMENSIONS", "INVALID_TOKEN_VALUE", -"INVALID_SPARSE_EMBEDDING" +"INVALID_SPARSE_EMBEDDING", +"INVALID_EMBEDDING" ], "enumDescriptions": [ "Default, shall not be used.", @@ -31794,7 +32510,8 @@ "File is not in UTF_8 format.", "Error parsing sparse dimensions field.", "Token restrict value is invalid.", -"Invalid sparse embedding." +"Invalid sparse embedding.", +"Invalid embedding." ], "type": "string" }, @@ -31925,10 +32642,6 @@ "readOnly": true, "type": "string" }, -"customEnvironmentSpec": { -"$ref": "GoogleCloudAiplatformV1beta1NotebookExecutionJobCustomEnvironmentSpec", -"description": "The custom compute configuration for an execution job." -}, "dataformRepositorySource": { "$ref": "GoogleCloudAiplatformV1beta1NotebookExecutionJobDataformRepositorySource", "description": "The Dataform Repository pointing to a single file notebook repository." @@ -32030,25 +32743,6 @@ }, "type": "object" }, -"GoogleCloudAiplatformV1beta1NotebookExecutionJobCustomEnvironmentSpec": { -"description": "Compute configuration to use for an execution job.", -"id": "GoogleCloudAiplatformV1beta1NotebookExecutionJobCustomEnvironmentSpec", -"properties": { -"machineSpec": { -"$ref": "GoogleCloudAiplatformV1beta1MachineSpec", -"description": "The specification of a single machine for the execution job." -}, -"networkSpec": { -"$ref": "GoogleCloudAiplatformV1beta1NetworkSpec", -"description": "The network configuration to use for the execution job." -}, -"persistentDiskSpec": { -"$ref": "GoogleCloudAiplatformV1beta1PersistentDiskSpec", -"description": "The specification of a persistent disk to attach for the execution job." -} -}, -"type": "object" -}, "GoogleCloudAiplatformV1beta1NotebookExecutionJobDataformRepositorySource": { "description": "The Dataform Repository containing the input notebook.", "id": "GoogleCloudAiplatformV1beta1NotebookExecutionJobDataformRepositorySource", @@ -32107,40 +32801,6 @@ }, "type": "object" }, -"GoogleCloudAiplatformV1beta1NotebookReservationAffinity": { -"description": "Notebook Reservation Affinity for consuming Zonal reservation.", -"id": "GoogleCloudAiplatformV1beta1NotebookReservationAffinity", -"properties": { -"consumeReservationType": { -"description": "Required. Specifies the type of reservation from which this instance can consume resources: RESERVATION_ANY (default), RESERVATION_SPECIFIC, or RESERVATION_NONE. See Consuming reserved instances for examples.", -"enum": [ -"RESERVATION_AFFINITY_TYPE_UNSPECIFIED", -"RESERVATION_NONE", -"RESERVATION_ANY", -"RESERVATION_SPECIFIC" -], -"enumDescriptions": [ -"Default type.", -"Do not consume from any allocated capacity.", -"Consume any reservation available.", -"Must consume from a specific reservation. Must specify key value fields for specifying the reservations." -], -"type": "string" -}, -"key": { -"description": "Optional. Corresponds to the label key of a reservation resource. 
To target a RESERVATION_SPECIFIC by name, use compute.googleapis.com/reservation-name as the key and specify the name of your reservation as its value.", -"type": "string" -}, -"values": { -"description": "Optional. Corresponds to the label values of a reservation resource. This must be the full path name of Reservation.", -"items": { -"type": "string" -}, -"type": "array" -} -}, -"type": "object" -}, "GoogleCloudAiplatformV1beta1NotebookRuntime": { "description": "A runtime is a virtual machine allocated to a particular user for a particular Notebook file on temporary basis with lifetime limited to 24 hours.", "id": "GoogleCloudAiplatformV1beta1NotebookRuntime", @@ -32159,6 +32819,11 @@ "description": "Required. The display name of the NotebookRuntime. The name can be up to 128 characters long and can consist of any UTF-8 characters.", "type": "string" }, +"encryptionSpec": { +"$ref": "GoogleCloudAiplatformV1beta1EncryptionSpec", +"description": "Output only. Customer-managed encryption key spec for the notebook runtime.", +"readOnly": true +}, "expirationTime": { "description": "Output only. Timestamp when this NotebookRuntime will be expired: 1. System Predefined NotebookRuntime: 24 hours after creation. After expiration, system predifined runtime will be deleted. 2. User created NotebookRuntime: 6 months after last upgrade. After expiration, user created runtime will be stopped and allowed for upgrade.", "format": "google-datetime", @@ -32180,6 +32845,11 @@ "readOnly": true, "type": "string" }, +"idleShutdownConfig": { +"$ref": "GoogleCloudAiplatformV1beta1NotebookIdleShutdownConfig", +"description": "Output only. The idle shutdown configuration of the notebook runtime.", +"readOnly": true +}, "isUpgradable": { "description": "Output only. Whether NotebookRuntime is upgradable.", "readOnly": true, @@ -32229,11 +32899,6 @@ "readOnly": true, "type": "string" }, -"reservationAffinity": { -"$ref": "GoogleCloudAiplatformV1beta1NotebookReservationAffinity", -"description": "Output only. Reservation Affinity of the notebook runtime.", -"readOnly": true -}, "runtimeState": { "description": "Output only. The runtime (instance) state of the NotebookRuntime.", "enum": [ @@ -32314,6 +32979,10 @@ "description": "Required. The display name of the NotebookRuntimeTemplate. The name can be up to 128 characters long and can consist of any UTF-8 characters.", "type": "string" }, +"encryptionSpec": { +"$ref": "GoogleCloudAiplatformV1beta1EncryptionSpec", +"description": "Customer-managed encryption key spec for the notebook runtime." +}, "etag": { "description": "Used to perform consistent read-modify-write updates. If not set, a blind \"overwrite\" update happens.", "type": "string" @@ -32371,10 +33040,6 @@ ], "type": "string" }, -"reservationAffinity": { -"$ref": "GoogleCloudAiplatformV1beta1NotebookReservationAffinity", -"description": "Optional. Reservation Affinity of the notebook runtime template." -}, "serviceAccount": { "description": "The service account that the runtime workload runs as. You can use any service account within the same project, but you must have the service account user permission to use the instance. If not specified, the [Compute Engine default service account](https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.", "type": "string" @@ -33420,7 +34085,7 @@ "properties": { "exec": { "$ref": "GoogleCloudAiplatformV1beta1ProbeExecAction", -"description": "Exec specifies the action to take." 
+"description": "ExecAction probes the health of a container by executing a command." }, "periodSeconds": { "description": "How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. Must be less than timeout_seconds. Maps to Kubernetes probe argument 'periodSeconds'.", @@ -34918,6 +35583,72 @@ "properties": {}, "type": "object" }, +"GoogleCloudAiplatformV1beta1ReinforcementLearningDataStats": { +"description": "Statistics computed for datasets used for reinforcement learning.", +"id": "GoogleCloudAiplatformV1beta1ReinforcementLearningDataStats", +"properties": { +"preferenceDatasetStats": { +"$ref": "GoogleCloudAiplatformV1beta1DatasetStats", +"description": "Output only. Statistics computed for the preference dataset. This can be either a human preference dataset or a preference dataset generated from AI feedback.", +"readOnly": true +}, +"promptDatasetStats": { +"$ref": "GoogleCloudAiplatformV1beta1DatasetStats", +"description": "Output only. Statistics computed for the prompt dataset used during reinforcement learning.", +"readOnly": true +} +}, +"type": "object" +}, +"GoogleCloudAiplatformV1beta1ReinforcementLearningHyperParameters": { +"description": "Hyperparameters for Reinforcement Learning.", +"id": "GoogleCloudAiplatformV1beta1ReinforcementLearningHyperParameters", +"properties": { +"epochCount": { +"description": "Optional. Number of training epoches for the tuning job.", +"format": "int64", +"type": "string" +}, +"humanFeedbackConfig": { +"$ref": "GoogleCloudAiplatformV1beta1HumanFeedbackConfig", +"description": "Configures Reinforcement Learning to use human feedback for preference data during tuning." +}, +"klCoefficient": { +"description": "Optional. KL divergence coefficient for Reinforcement Learning.", +"format": "double", +"type": "number" +}, +"learningRateMultiplier": { +"description": "Optional. Learning rate multiplier for Reinforcement Learning.", +"format": "double", +"type": "number" +}, +"rewardModelTrainingConfig": { +"$ref": "GoogleCloudAiplatformV1beta1RewardModelTrainingConfig", +"description": "Configures Reinforcement Learning to train a reward model to learn preference." +} +}, +"type": "object" +}, +"GoogleCloudAiplatformV1beta1ReinforcementLearningSpec": { +"description": "Tuning Spec for Reinforcement Learning.", +"id": "GoogleCloudAiplatformV1beta1ReinforcementLearningSpec", +"properties": { +"hyperParameters": { +"$ref": "GoogleCloudAiplatformV1beta1ReinforcementLearningHyperParameters", +"description": "Optional. Additional hyper-parameters to use during tuning." +}, +"promptDatasetUri": { +"description": "Required. Cloud Storage path to the prompt dataset to use during Reinforcement Learning.", +"type": "string" +}, +"validationDatasetUri": { +"description": "Optional. Cloud Storage path to the validation dataset to use during Reinforcement Learning.", +"type": "string" +} +}, +"type": "object" +}, "GoogleCloudAiplatformV1beta1RemoveContextChildrenRequest": { "description": "Request message for MetadataService.DeleteContextChildrenRequest.", "id": "GoogleCloudAiplatformV1beta1RemoveContextChildrenRequest", @@ -35117,6 +35848,7 @@ "type": "object" }, "notebookRuntimeTemplate": { +"deprecated": true, "description": "Output only. The resource name of NotebookRuntimeTemplate for the RoV Persistent Cluster The NotebokRuntimeTemplate is created in the same VPC (if set), and with the same Ray and Python version as the Persistent Cluster. 
Example: \"projects/1000/locations/us-central1/notebookRuntimeTemplates/abc123\"", "readOnly": true, "type": "string" @@ -35270,6 +36002,23 @@ }, "type": "object" }, +"GoogleCloudAiplatformV1beta1RewardModelTrainingConfig": { +"description": "Configures Reinforcement Learning to learn preference by training a reward model.", +"id": "GoogleCloudAiplatformV1beta1RewardModelTrainingConfig", +"properties": { +"epochCount": { +"description": "Optional. Number of training epoches for the reward model training job.", +"format": "int64", +"type": "string" +}, +"learningRateMultiplier": { +"description": "Optional. Learning rate multiplier for reward model training.", +"format": "double", +"type": "number" +} +}, +"type": "object" +}, "GoogleCloudAiplatformV1beta1RougeInput": { "description": "Input for rouge metric.", "id": "GoogleCloudAiplatformV1beta1RougeInput", @@ -35390,13 +36139,12 @@ "GoogleCloudAiplatformV1beta1RuntimeConfigVertexAISearchRuntimeConfig": { "id": "GoogleCloudAiplatformV1beta1RuntimeConfigVertexAISearchRuntimeConfig", "properties": { -"appId": { -"description": "Vertex AI Search App ID. This is used to construct the search request. By setting this app_id, API will construct the serving config which is required to call search API for the user. The app_id and serving_config_name cannot both be empty at the same time.", +"engineId": { +"description": "Optional. Vertex AI Search engine ID. This is used to construct the search request. By setting this engine_id, API will construct the serving config using the default value to call search API for the user. The engine_id and serving_config_name cannot both be empty at the same time.", "type": "string" }, "servingConfigName": { -"deprecated": true, -"description": "[Deprecated] Please use app_id instead. Vertex AI Search serving config name. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}/servingConfigs/{serving_config}`", +"description": "Optional. Vertex AI Search serving config name. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}/servingConfigs/{serving_config}`", "type": "string" } }, @@ -39888,6 +40636,10 @@ "description": "Filter for searching ModelMonitoringStats.", "id": "GoogleCloudAiplatformV1beta1SearchModelMonitoringStatsFilter", "properties": { +"genAiStatsFilter": { +"$ref": "GoogleCloudAiplatformV1beta1SearchModelMonitoringStatsFilterGenAiStatsFilter", +"description": "GenAi statistics filter." +}, "tabularStatsFilter": { "$ref": "GoogleCloudAiplatformV1beta1SearchModelMonitoringStatsFilterTabularStatsFilter", "description": "Tabular statistics filter." 
@@ -39895,6 +40647,33 @@ }, "type": "object" }, +"GoogleCloudAiplatformV1beta1SearchModelMonitoringStatsFilterGenAiStatsFilter": { +"description": "GenAi statistics filter.", +"id": "GoogleCloudAiplatformV1beta1SearchModelMonitoringStatsFilterGenAiStatsFilter", +"properties": { +"clusterId": { +"description": "From a particular cluster of monitoring results.", +"type": "string" +}, +"modelMonitoringJob": { +"description": "From a particular monitoring job.", +"type": "string" +}, +"modelMonitoringSchedule": { +"description": "From a particular monitoring schedule.", +"type": "string" +}, +"objectiveType": { +"description": "One of the supported monitoring objectives: `gen-ai-general` `gen-ai-evaluation` `gen-ai-safety`", +"type": "string" +}, +"statsName": { +"description": "If not specified, will return all the stats_names.", +"type": "string" +} +}, +"type": "object" +}, "GoogleCloudAiplatformV1beta1SearchModelMonitoringStatsFilterTabularStatsFilter": { "description": "Tabular statistics filter.", "id": "GoogleCloudAiplatformV1beta1SearchModelMonitoringStatsFilterTabularStatsFilter", @@ -40251,6 +41030,35 @@ }, "type": "object" }, +"GoogleCloudAiplatformV1beta1StructFieldValue": { +"description": "One field of a Struct (or object) type feature value.", +"id": "GoogleCloudAiplatformV1beta1StructFieldValue", +"properties": { +"name": { +"description": "Name of the field in the struct feature.", +"type": "string" +}, +"value": { +"$ref": "GoogleCloudAiplatformV1beta1FeatureValue", +"description": "The value for this field." +} +}, +"type": "object" +}, +"GoogleCloudAiplatformV1beta1StructValue": { +"description": "Struct (or object) type feature value.", +"id": "GoogleCloudAiplatformV1beta1StructValue", +"properties": { +"values": { +"description": "A list of field values.", +"items": { +"$ref": "GoogleCloudAiplatformV1beta1StructFieldValue" +}, +"type": "array" +} +}, +"type": "object" +}, "GoogleCloudAiplatformV1beta1Study": { "description": "A message representing a Study.", "id": "GoogleCloudAiplatformV1beta1Study", @@ -41888,6 +42696,10 @@ }, "type": "array" }, +"googleSearchRetrieval": { +"$ref": "GoogleCloudAiplatformV1beta1GoogleSearchRetrieval", +"description": "Optional. GoogleSearchRetrieval tool type. Specialized retrieval tool that is powered by Google search." +}, "retrieval": { "$ref": "GoogleCloudAiplatformV1beta1Retrieval", "description": "Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation." @@ -42511,6 +43323,14 @@ "description": "The tuning data statistic values for TuningJob.", "id": "GoogleCloudAiplatformV1beta1TuningDataStats", "properties": { +"distillationDataStats": { +"$ref": "GoogleCloudAiplatformV1beta1DistillationDataStats", +"description": "Statistics for distillation." +}, +"reinforcementLearningDataStats": { +"$ref": "GoogleCloudAiplatformV1beta1ReinforcementLearningDataStats", +"description": "Statistics for reinforcement learning." +}, "supervisedTuningDataStats": { "$ref": "GoogleCloudAiplatformV1beta1SupervisedTuningDataStats", "description": "The SFT Tuning data stats." @@ -42536,6 +43356,10 @@ "description": "Optional. The description of the TuningJob.", "type": "string" }, +"distillationSpec": { +"$ref": "GoogleCloudAiplatformV1beta1DistillationSpec", +"description": "Tuning Spec for Distillation." 
+}, "encryptionSpec": { "$ref": "GoogleCloudAiplatformV1beta1EncryptionSpec", "description": "Customer-managed encryption key options for a TuningJob. If this is set, then all resources created by the TuningJob will be encrypted with the provided encryption key." @@ -42568,6 +43392,15 @@ "readOnly": true, "type": "string" }, +"pipelineJob": { +"description": "Output only. The resource name of the PipelineJob associated with the TuningJob. Format: `projects/{project}/locations/{location}/pipelineJobs/{pipeline_job}`.", +"readOnly": true, +"type": "string" +}, +"reinforcementLearningSpec": { +"$ref": "GoogleCloudAiplatformV1beta1ReinforcementLearningSpec", +"description": "Tuning Spec for Reinforcement Learning." +}, "startTime": { "description": "Output only. Time when the TuningJob for the first time entered the `JOB_STATE_RUNNING` state.", "format": "google-datetime", diff --git a/googleapiclient/discovery_cache/documents/alertcenter.v1beta1.json b/googleapiclient/discovery_cache/documents/alertcenter.v1beta1.json index dbef524dd55..dccca905844 100644 --- a/googleapiclient/discovery_cache/documents/alertcenter.v1beta1.json +++ b/googleapiclient/discovery_cache/documents/alertcenter.v1beta1.json @@ -423,7 +423,7 @@ } } }, -"revision": "20240520", +"revision": "20240527", "rootUrl": "https://alertcenter.googleapis.com/", "schemas": { "AbuseDetected": { diff --git a/googleapiclient/discovery_cache/documents/analyticsadmin.v1alpha.json b/googleapiclient/discovery_cache/documents/analyticsadmin.v1alpha.json index f412afd391c..c222c65f800 100644 --- a/googleapiclient/discovery_cache/documents/analyticsadmin.v1alpha.json +++ b/googleapiclient/discovery_cache/documents/analyticsadmin.v1alpha.json @@ -3016,6 +3016,35 @@ } } }, +"eventEditRules": { +"methods": { +"reorder": { +"description": "Changes the processing order of event edit rules on the specified stream.", +"flatPath": "v1alpha/properties/{propertiesId}/dataStreams/{dataStreamsId}/eventEditRules:reorder", +"httpMethod": "POST", +"id": "analyticsadmin.properties.dataStreams.eventEditRules.reorder", +"parameterOrder": [ +"parent" +], +"parameters": { +"parent": { +"description": "Required. Example format: properties/123/dataStreams/456", +"location": "path", +"pattern": "^properties/[^/]+/dataStreams/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1alpha/{+parent}/eventEditRules:reorder", +"request": { +"$ref": "GoogleAnalyticsAdminV1alphaReorderEventEditRulesRequest" +}, +"response": { +"$ref": "GoogleProtobufEmpty" +} +} +} +}, "measurementProtocolSecrets": { "methods": { "create": { @@ -4617,7 +4646,7 @@ } } }, -"revision": "20240521", +"revision": "20240602", "rootUrl": "https://analyticsadmin.googleapis.com/", "schemas": { "GoogleAnalyticsAdminV1alphaAccessBetweenFilter": { @@ -8206,6 +8235,20 @@ }, "type": "object" }, +"GoogleAnalyticsAdminV1alphaReorderEventEditRulesRequest": { +"description": "Request message for ReorderEventEditRules RPC.", +"id": "GoogleAnalyticsAdminV1alphaReorderEventEditRulesRequest", +"properties": { +"eventEditRules": { +"description": "Required. EventEditRule resource names for the specified data stream, in the needed processing order. 
All EventEditRules for the stream must be present in the list.", +"items": { +"type": "string" +}, +"type": "array" +} +}, +"type": "object" +}, "GoogleAnalyticsAdminV1alphaRollupPropertySourceLink": { "description": "A link that references a source property under the parent rollup property.", "id": "GoogleAnalyticsAdminV1alphaRollupPropertySourceLink", diff --git a/googleapiclient/discovery_cache/documents/analyticsadmin.v1beta.json b/googleapiclient/discovery_cache/documents/analyticsadmin.v1beta.json index 7f054f70b7e..cef3f579101 100644 --- a/googleapiclient/discovery_cache/documents/analyticsadmin.v1beta.json +++ b/googleapiclient/discovery_cache/documents/analyticsadmin.v1beta.json @@ -1253,6 +1253,35 @@ } }, "resources": { +"eventEditRules": { +"methods": { +"reorder": { +"description": "Changes the processing order of event edit rules on the specified stream.", +"flatPath": "v1beta/properties/{propertiesId}/dataStreams/{dataStreamsId}/eventEditRules:reorder", +"httpMethod": "POST", +"id": "analyticsadmin.properties.dataStreams.eventEditRules.reorder", +"parameterOrder": [ +"parent" +], +"parameters": { +"parent": { +"description": "Required. Example format: properties/123/dataStreams/456", +"location": "path", +"pattern": "^properties/[^/]+/dataStreams/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1beta/{+parent}/eventEditRules:reorder", +"request": { +"$ref": "GoogleAnalyticsAdminV1betaReorderEventEditRulesRequest" +}, +"response": { +"$ref": "GoogleProtobufEmpty" +} +} +} +}, "measurementProtocolSecrets": { "methods": { "create": { @@ -1788,7 +1817,7 @@ } } }, -"revision": "20240521", +"revision": "20240602", "rootUrl": "https://analyticsadmin.googleapis.com/", "schemas": { "GoogleAnalyticsAdminV1betaAccessBetweenFilter": { @@ -3323,6 +3352,20 @@ }, "type": "object" }, +"GoogleAnalyticsAdminV1betaReorderEventEditRulesRequest": { +"description": "Request message for ReorderEventEditRules RPC.", +"id": "GoogleAnalyticsAdminV1betaReorderEventEditRulesRequest", +"properties": { +"eventEditRules": { +"description": "Required. EventEditRule resource names for the specified data stream, in the needed processing order. 
All EventEditRules for the stream must be present in the list.", +"items": { +"type": "string" +}, +"type": "array" +} +}, +"type": "object" +}, "GoogleAnalyticsAdminV1betaRunAccessReportRequest": { "description": "The request for a Data Access Record Report.", "id": "GoogleAnalyticsAdminV1betaRunAccessReportRequest", diff --git a/googleapiclient/discovery_cache/documents/analyticsdata.v1beta.json b/googleapiclient/discovery_cache/documents/analyticsdata.v1beta.json index 0f2a650d61e..bd556a91e0b 100644 --- a/googleapiclient/discovery_cache/documents/analyticsdata.v1beta.json +++ b/googleapiclient/discovery_cache/documents/analyticsdata.v1beta.json @@ -440,7 +440,7 @@ } } }, -"revision": "20240522", +"revision": "20240602", "rootUrl": "https://analyticsdata.googleapis.com/", "schemas": { "ActiveMetricRestriction": { diff --git a/googleapiclient/discovery_cache/documents/analyticshub.v1.json b/googleapiclient/discovery_cache/documents/analyticshub.v1.json index 7746316aed2..eb3a515a405 100644 --- a/googleapiclient/discovery_cache/documents/analyticshub.v1.json +++ b/googleapiclient/discovery_cache/documents/analyticshub.v1.json @@ -1022,7 +1022,7 @@ } } }, -"revision": "20240520", +"revision": "20240527", "rootUrl": "https://analyticshub.googleapis.com/", "schemas": { "AuditConfig": { diff --git a/googleapiclient/discovery_cache/documents/analyticshub.v1beta1.json b/googleapiclient/discovery_cache/documents/analyticshub.v1beta1.json index c94260e685a..6bdfa29927a 100644 --- a/googleapiclient/discovery_cache/documents/analyticshub.v1beta1.json +++ b/googleapiclient/discovery_cache/documents/analyticshub.v1beta1.json @@ -695,7 +695,7 @@ } } }, -"revision": "20240520", +"revision": "20240527", "rootUrl": "https://analyticshub.googleapis.com/", "schemas": { "AuditConfig": { diff --git a/googleapiclient/discovery_cache/documents/androiddeviceprovisioning.v1.json b/googleapiclient/discovery_cache/documents/androiddeviceprovisioning.v1.json index 2b8d513a359..945b8d38666 100644 --- a/googleapiclient/discovery_cache/documents/androiddeviceprovisioning.v1.json +++ b/googleapiclient/discovery_cache/documents/androiddeviceprovisioning.v1.json @@ -851,7 +851,7 @@ } } }, -"revision": "20240524", +"revision": "20240529", "rootUrl": "https://androiddeviceprovisioning.googleapis.com/", "schemas": { "ClaimDeviceRequest": { diff --git a/googleapiclient/discovery_cache/documents/androidenterprise.v1.json b/googleapiclient/discovery_cache/documents/androidenterprise.v1.json index 1540b2dc4ce..b6c869481d6 100644 --- a/googleapiclient/discovery_cache/documents/androidenterprise.v1.json +++ b/googleapiclient/discovery_cache/documents/androidenterprise.v1.json @@ -2649,7 +2649,7 @@ } } }, -"revision": "20240523", +"revision": "20240530", "rootUrl": "https://androidenterprise.googleapis.com/", "schemas": { "Administrator": { diff --git a/googleapiclient/discovery_cache/documents/androidmanagement.v1.json b/googleapiclient/discovery_cache/documents/androidmanagement.v1.json index 36efbff7a62..4fc15cd29c6 100644 --- a/googleapiclient/discovery_cache/documents/androidmanagement.v1.json +++ b/googleapiclient/discovery_cache/documents/androidmanagement.v1.json @@ -1168,7 +1168,7 @@ } } }, -"revision": "20240516", +"revision": "20240531", "rootUrl": "https://androidmanagement.googleapis.com/", "schemas": { "AdbShellCommandEvent": { @@ -1811,6 +1811,20 @@ }, "type": "array" }, +"userControlSettings": { +"description": "Optional. Specifies whether user control is permitted for the app. 
User control includes user actions like force-stopping and clearing app data. Supported on Android 11 and above.", +"enum": [ +"USER_CONTROL_SETTINGS_UNSPECIFIED", +"USER_CONTROL_ALLOWED", +"USER_CONTROL_DISALLOWED" +], +"enumDescriptions": [ +"Uses the default behaviour of the app to determine if user control is allowed or disallowed. For most apps, user control is allowed by default, but for some critical apps such as companion apps (extensionConfig set to true), kiosk apps and other critical system apps, user control is disallowed.", +"User control is allowed for the app. Kiosk apps can use this to allow user control.", +"User control is disallowed for the app. API_LEVEL is reported if the Android version is less than 11." +], +"type": "string" +}, "workProfileWidgets": { "description": "Specifies whether the app installed in the work profile is allowed to add widgets to the home screen.", "enum": [ @@ -3012,12 +3026,14 @@ "enum": [ "ALLOW_PERSONAL_USAGE_UNSPECIFIED", "PERSONAL_USAGE_ALLOWED", -"PERSONAL_USAGE_DISALLOWED" +"PERSONAL_USAGE_DISALLOWED", +"PERSONAL_USAGE_DISALLOWED_USERLESS" ], "enumDescriptions": [ "Personal usage restriction is not specified", "Personal usage is allowed", -"Personal usage is disallowed" +"Personal usage is disallowed", +"Device is not associated with a single user, and thus both personal usage and corporate identity authentication are not expected." ], "type": "string" }, @@ -3107,6 +3123,10 @@ false "description": "The name of the enterprise displayed to users. This field has a maximum length of 100 characters.", "type": "string" }, +"googleAuthenticationSettings": { +"$ref": "GoogleAuthenticationSettings", +"description": "Settings for Google-provided user authentication." +}, "logo": { "$ref": "ExternalData", "description": "An image displayed as a logo during device provisioning. Supported types are: image/bmp, image/gif, image/x-ico, image/jpeg, image/png, image/webp, image/vnd.wap.wbmp, image/x-adobe-dng." @@ -3211,6 +3231,28 @@ false }, "type": "object" }, +"GoogleAuthenticationSettings": { +"description": "Contains settings for Google-provided user authentication.", +"id": "GoogleAuthenticationSettings", +"properties": { +"googleAuthenticationRequired": { +"description": "Output only. Whether users need to be authenticated by Google during the enrollment process. IT admin can specify if Google authentication is enabled for the enterprise for knowledge worker devices. This value can be set only via the Google Admin Console. Google authentication can be used with signin_url. In the case where Google authentication is required and a signin_url is specified, Google authentication will be launched before signin_url.", +"enum": [ +"GOOGLE_AUTHENTICATION_REQUIRED_UNSPECIFIED", +"NOT_REQUIRED", +"REQUIRED" +], +"enumDescriptions": [ +"This value is not used.", +"Google authentication is not required.", +"User is required to be successfully authenticated by Google." +], +"readOnly": true, +"type": "string" +} +}, +"type": "object" +}, "HardwareInfo": { "description": "Information about device hardware.
The fields related to temperature thresholds are only available if hardwareStatusEnabled is true in the device's policy.", "id": "HardwareInfo", @@ -5621,12 +5663,14 @@ false "enum": [ "ALLOW_PERSONAL_USAGE_UNSPECIFIED", "PERSONAL_USAGE_ALLOWED", -"PERSONAL_USAGE_DISALLOWED" +"PERSONAL_USAGE_DISALLOWED", +"PERSONAL_USAGE_DISALLOWED_USERLESS" ], "enumDescriptions": [ "Personal usage restriction is not specified", "Personal usage is allowed", -"Personal usage is disallowed" +"Personal usage is disallowed", +"Device is not associated with a single user, and thus both personal usage and corporate identity authentication are not expected." ], "type": "string" }, diff --git a/googleapiclient/discovery_cache/documents/androidpublisher.v3.json b/googleapiclient/discovery_cache/documents/androidpublisher.v3.json index 10ee0afb95f..978608e6fee 100644 --- a/googleapiclient/discovery_cache/documents/androidpublisher.v3.json +++ b/googleapiclient/discovery_cache/documents/androidpublisher.v3.json @@ -4731,7 +4731,7 @@ } } }, -"revision": "20240522", +"revision": "20240530", "rootUrl": "https://androidpublisher.googleapis.com/", "schemas": { "Abi": { diff --git a/googleapiclient/discovery_cache/documents/appengine.v1.json b/googleapiclient/discovery_cache/documents/appengine.v1.json index 8b51b837f64..81d60c7bde3 100644 --- a/googleapiclient/discovery_cache/documents/appengine.v1.json +++ b/googleapiclient/discovery_cache/documents/appengine.v1.json @@ -1718,7 +1718,7 @@ } } }, -"revision": "20240513", +"revision": "20240527", "rootUrl": "https://appengine.googleapis.com/", "schemas": { "ApiConfigHandler": { diff --git a/googleapiclient/discovery_cache/documents/appengine.v1alpha.json b/googleapiclient/discovery_cache/documents/appengine.v1alpha.json index 1137f9d3141..fb5dded963d 100644 --- a/googleapiclient/discovery_cache/documents/appengine.v1alpha.json +++ b/googleapiclient/discovery_cache/documents/appengine.v1alpha.json @@ -946,7 +946,7 @@ } } }, -"revision": "20240513", +"revision": "20240527", "rootUrl": "https://appengine.googleapis.com/", "schemas": { "AuthorizedCertificate": { diff --git a/googleapiclient/discovery_cache/documents/appengine.v1beta.json b/googleapiclient/discovery_cache/documents/appengine.v1beta.json index fa121978f6b..b51882bd1df 100644 --- a/googleapiclient/discovery_cache/documents/appengine.v1beta.json +++ b/googleapiclient/discovery_cache/documents/appengine.v1beta.json @@ -1918,7 +1918,7 @@ } } }, -"revision": "20240513", +"revision": "20240527", "rootUrl": "https://appengine.googleapis.com/", "schemas": { "ApiConfigHandler": { diff --git a/googleapiclient/discovery_cache/documents/area120tables.v1alpha1.json b/googleapiclient/discovery_cache/documents/area120tables.v1alpha1.json index 13f6763641b..b9ccda6231e 100644 --- a/googleapiclient/discovery_cache/documents/area120tables.v1alpha1.json +++ b/googleapiclient/discovery_cache/documents/area120tables.v1alpha1.json @@ -586,7 +586,7 @@ } } }, -"revision": "20240526", +"revision": "20240530", "rootUrl": "https://area120tables.googleapis.com/", "schemas": { "BatchCreateRowsRequest": { diff --git a/googleapiclient/discovery_cache/documents/authorizedbuyersmarketplace.v1.json b/googleapiclient/discovery_cache/documents/authorizedbuyersmarketplace.v1.json index 8f11efbe3ee..f935158aba2 100644 --- a/googleapiclient/discovery_cache/documents/authorizedbuyersmarketplace.v1.json +++ b/googleapiclient/discovery_cache/documents/authorizedbuyersmarketplace.v1.json @@ -1367,7 +1367,7 @@ } } }, -"revision": "20240523", +"revision": 
"20240603", "rootUrl": "https://authorizedbuyersmarketplace.googleapis.com/", "schemas": { "AcceptProposalRequest": { diff --git a/googleapiclient/discovery_cache/documents/backupdr.v1.json b/googleapiclient/discovery_cache/documents/backupdr.v1.json index 091d0b9c8ac..4909711e183 100644 --- a/googleapiclient/discovery_cache/documents/backupdr.v1.json +++ b/googleapiclient/discovery_cache/documents/backupdr.v1.json @@ -535,7 +535,7 @@ } } }, -"revision": "20240515", +"revision": "20240522", "rootUrl": "https://backupdr.googleapis.com/", "schemas": { "AuditConfig": { @@ -793,6 +793,16 @@ "readOnly": true, "type": "string" }, +"satisfiesPzi": { +"description": "Output only. Reserved for future use.", +"readOnly": true, +"type": "boolean" +}, +"satisfiesPzs": { +"description": "Output only. Reserved for future use.", +"readOnly": true, +"type": "boolean" +}, "state": { "description": "Output only. The ManagementServer state.", "enum": [ diff --git a/googleapiclient/discovery_cache/documents/batch.v1.json b/googleapiclient/discovery_cache/documents/batch.v1.json index f168831c445..a55ca84de83 100644 --- a/googleapiclient/discovery_cache/documents/batch.v1.json +++ b/googleapiclient/discovery_cache/documents/batch.v1.json @@ -561,7 +561,7 @@ } } }, -"revision": "20240517", +"revision": "20240523", "rootUrl": "https://batch.googleapis.com/", "schemas": { "Accelerator": { diff --git a/googleapiclient/discovery_cache/documents/biglake.v1.json b/googleapiclient/discovery_cache/documents/biglake.v1.json index e36b8a4c27d..27451a2e001 100644 --- a/googleapiclient/discovery_cache/documents/biglake.v1.json +++ b/googleapiclient/discovery_cache/documents/biglake.v1.json @@ -616,7 +616,7 @@ } } }, -"revision": "20240522", +"revision": "20240529", "rootUrl": "https://biglake.googleapis.com/", "schemas": { "Catalog": { diff --git a/googleapiclient/discovery_cache/documents/bigquerydatapolicy.v1.json b/googleapiclient/discovery_cache/documents/bigquerydatapolicy.v1.json index 55f3ba73514..550d1393d6b 100644 --- a/googleapiclient/discovery_cache/documents/bigquerydatapolicy.v1.json +++ b/googleapiclient/discovery_cache/documents/bigquerydatapolicy.v1.json @@ -395,7 +395,7 @@ } } }, -"revision": "20240513", +"revision": "20240520", "rootUrl": "https://bigquerydatapolicy.googleapis.com/", "schemas": { "AuditConfig": { diff --git a/googleapiclient/discovery_cache/documents/bigtableadmin.v2.json b/googleapiclient/discovery_cache/documents/bigtableadmin.v2.json index 57fac09de49..32bf3ce9222 100644 --- a/googleapiclient/discovery_cache/documents/bigtableadmin.v2.json +++ b/googleapiclient/discovery_cache/documents/bigtableadmin.v2.json @@ -2194,7 +2194,7 @@ } } }, -"revision": "20240517", +"revision": "20240520", "rootUrl": "https://bigtableadmin.googleapis.com/", "schemas": { "AppProfile": { diff --git a/googleapiclient/discovery_cache/documents/billingbudgets.v1.json b/googleapiclient/discovery_cache/documents/billingbudgets.v1.json index c11e31a0fbc..fbebf3c3698 100644 --- a/googleapiclient/discovery_cache/documents/billingbudgets.v1.json +++ b/googleapiclient/discovery_cache/documents/billingbudgets.v1.json @@ -275,7 +275,7 @@ } } }, -"revision": "20240519", +"revision": "20240602", "rootUrl": "https://billingbudgets.googleapis.com/", "schemas": { "GoogleCloudBillingBudgetsV1Budget": { diff --git a/googleapiclient/discovery_cache/documents/billingbudgets.v1beta1.json b/googleapiclient/discovery_cache/documents/billingbudgets.v1beta1.json index 1b0f0ea634e..08e8841d6e1 100644 --- 
a/googleapiclient/discovery_cache/documents/billingbudgets.v1beta1.json +++ b/googleapiclient/discovery_cache/documents/billingbudgets.v1beta1.json @@ -269,7 +269,7 @@ } } }, -"revision": "20240519", +"revision": "20240602", "rootUrl": "https://billingbudgets.googleapis.com/", "schemas": { "GoogleCloudBillingBudgetsV1beta1AllUpdatesRule": { diff --git a/googleapiclient/discovery_cache/documents/binaryauthorization.v1.json b/googleapiclient/discovery_cache/documents/binaryauthorization.v1.json index d4edbd2c958..748f1e7a822 100644 --- a/googleapiclient/discovery_cache/documents/binaryauthorization.v1.json +++ b/googleapiclient/discovery_cache/documents/binaryauthorization.v1.json @@ -742,7 +742,7 @@ } } }, -"revision": "20240517", +"revision": "20240531", "rootUrl": "https://binaryauthorization.googleapis.com/", "schemas": { "AdmissionRule": { @@ -1667,7 +1667,7 @@ "type": "array" }, "containerAnalysisAttestationProjects": { -"description": "Optional. The projects where attestations are stored as Container Analysis Occurrences, in the format `projects/[PROJECT_ID]`. Only one attestation needs to successfully verify an image for this check to pass, so a single verified attestation found in any of `container_analysis_attestation_projects` is sufficient for the check to pass. When fetching Occurrences from Container Analysis, only `AttestationOccurrence` kinds are considered. In the future, additional Occurrence kinds may be added to the query. Maximum number of `container_analysis_attestation_projects` allowed in each `SimpleSigningAttestationCheck` is 10.", +"description": "Optional. The projects where attestations are stored as Container Analysis Occurrences, in the format `projects/[PROJECT_ID]`. Only one attestation needs to successfully verify an image for this check to pass, so a single verified attestation found in any of `container_analysis_attestation_projects` is sufficient for the check to pass. A project ID must be used, not a project number. When fetching Occurrences from Container Analysis, only `AttestationOccurrence` kinds are considered. In the future, additional Occurrence kinds may be added to the query. Maximum number of `container_analysis_attestation_projects` allowed in each `SimpleSigningAttestationCheck` is 10.", "items": { "type": "string" }, @@ -1742,7 +1742,7 @@ "type": "string" }, "noteReference": { -"description": "Required. The Grafeas resource name of an Attestation.Authority Note, created by the user, in the format: `projects/*/notes/*`. This field may not be updated. An attestation by this attestor is stored as a Grafeas Attestation.Authority Occurrence that names a container image and that links to this Note. Grafeas is an external dependency.", +"description": "Required. The Grafeas resource name of an Attestation.Authority Note, created by the user, in the format: `projects/[PROJECT_ID]/notes/*`. This field may not be updated. A project ID must be used, not a project number. An attestation by this attestor is stored as a Grafeas Attestation.Authority Occurrence that names a container image and that links to this Note.
Grafeas is an external dependency.", "type": "string" }, "publicKeys": { diff --git a/googleapiclient/discovery_cache/documents/binaryauthorization.v1beta1.json b/googleapiclient/discovery_cache/documents/binaryauthorization.v1beta1.json index 4bcea74fec4..6445a4b4a4a 100644 --- a/googleapiclient/discovery_cache/documents/binaryauthorization.v1beta1.json +++ b/googleapiclient/discovery_cache/documents/binaryauthorization.v1beta1.json @@ -551,7 +551,7 @@ } } }, -"revision": "20240517", +"revision": "20240531", "rootUrl": "https://binaryauthorization.googleapis.com/", "schemas": { "AdmissionRule": { diff --git a/googleapiclient/discovery_cache/documents/blockchainnodeengine.v1.json b/googleapiclient/discovery_cache/documents/blockchainnodeengine.v1.json index b1145b24e17..49e2975e0df 100644 --- a/googleapiclient/discovery_cache/documents/blockchainnodeengine.v1.json +++ b/googleapiclient/discovery_cache/documents/blockchainnodeengine.v1.json @@ -487,7 +487,7 @@ } } }, -"revision": "20240515", +"revision": "20240529", "rootUrl": "https://blockchainnodeengine.googleapis.com/", "schemas": { "BlockchainNode": { diff --git a/googleapiclient/discovery_cache/documents/blogger.v2.json b/googleapiclient/discovery_cache/documents/blogger.v2.json index 3f93f2c0f34..69266f8c643 100644 --- a/googleapiclient/discovery_cache/documents/blogger.v2.json +++ b/googleapiclient/discovery_cache/documents/blogger.v2.json @@ -401,7 +401,7 @@ } } }, -"revision": "20240526", +"revision": "20240602", "rootUrl": "https://blogger.googleapis.com/", "schemas": { "Blog": { diff --git a/googleapiclient/discovery_cache/documents/blogger.v3.json b/googleapiclient/discovery_cache/documents/blogger.v3.json index 394c0413f5a..dba77240baf 100644 --- a/googleapiclient/discovery_cache/documents/blogger.v3.json +++ b/googleapiclient/discovery_cache/documents/blogger.v3.json @@ -1710,7 +1710,7 @@ } } }, -"revision": "20240526", +"revision": "20240602", "rootUrl": "https://blogger.googleapis.com/", "schemas": { "Blog": { diff --git a/googleapiclient/discovery_cache/documents/books.v1.json b/googleapiclient/discovery_cache/documents/books.v1.json index 7efc7a9210b..3be57320135 100644 --- a/googleapiclient/discovery_cache/documents/books.v1.json +++ b/googleapiclient/discovery_cache/documents/books.v1.json @@ -2677,7 +2677,7 @@ } } }, -"revision": "20240514", +"revision": "20240528", "rootUrl": "https://books.googleapis.com/", "schemas": { "Annotation": { diff --git a/googleapiclient/discovery_cache/documents/businessprofileperformance.v1.json b/googleapiclient/discovery_cache/documents/businessprofileperformance.v1.json index e7725686655..b1f4e4a280e 100644 --- a/googleapiclient/discovery_cache/documents/businessprofileperformance.v1.json +++ b/googleapiclient/discovery_cache/documents/businessprofileperformance.v1.json @@ -417,7 +417,7 @@ } } }, -"revision": "20240526", +"revision": "20240602", "rootUrl": "https://businessprofileperformance.googleapis.com/", "schemas": { "DailyMetricTimeSeries": { diff --git a/googleapiclient/discovery_cache/documents/calendar.v3.json b/googleapiclient/discovery_cache/documents/calendar.v3.json index 4cca6760e9b..46a01fef30b 100644 --- a/googleapiclient/discovery_cache/documents/calendar.v3.json +++ b/googleapiclient/discovery_cache/documents/calendar.v3.json @@ -1092,12 +1092,14 @@ "enum": [ "default", "focusTime", +"fromGmail", "outOfOffice", "workingLocation" ], "enumDescriptions": [ "Regular events.", "Focus time events.", +"Events from Gmail.", "Out of office events.", "Working location events." 
], @@ -1217,7 +1219,7 @@ "supportsSubscription": true }, "move": { -"description": "Moves an event to another calendar, i.e. changes an event's organizer. Note that only default events can be moved; outOfOffice, focusTime and workingLocation events cannot be moved.", +"description": "Moves an event to another calendar, i.e. changes an event's organizer. Note that only default events can be moved; outOfOffice, focusTime, workingLocation and fromGmail events cannot be moved.", "httpMethod": "POST", "id": "calendar.events.move", "parameterOrder": [ @@ -1507,12 +1509,14 @@ "enum": [ "default", "focusTime", +"fromGmail", "outOfOffice", "workingLocation" ], "enumDescriptions": [ "Regular events.", "Focus time events.", +"Events from Gmail.", "Out of office events.", "Working location events." ], @@ -1759,7 +1763,7 @@ } } }, -"revision": "20240425", +"revision": "20240523", "rootUrl": "https://www.googleapis.com/", "schemas": { "Acl": { @@ -2416,7 +2420,7 @@ }, "eventType": { "default": "default", -"description": "Specific type of the event. This cannot be modified after the event is created. Possible values are: \n- \"default\" - A regular event or not further specified. \n- \"outOfOffice\" - An out-of-office event. \n- \"focusTime\" - A focus-time event. \n- \"workingLocation\" - A working location event.", +"description": "Specific type of the event. This cannot be modified after the event is created. Possible values are: \n- \"default\" - A regular event or not further specified. \n- \"outOfOffice\" - An out-of-office event. \n- \"focusTime\" - A focus-time event. \n- \"workingLocation\" - A working location event. \n- \"fromGmail\" - An event from Gmail. This type of event cannot be created.", "type": "string" }, "extendedProperties": { diff --git a/googleapiclient/discovery_cache/documents/checks.v1alpha.json b/googleapiclient/discovery_cache/documents/checks.v1alpha.json index 30fdaa2fcb5..a16bd3a7365 100644 --- a/googleapiclient/discovery_cache/documents/checks.v1alpha.json +++ b/googleapiclient/discovery_cache/documents/checks.v1alpha.json @@ -414,7 +414,7 @@ } } }, -"revision": "20240526", +"revision": "20240602", "rootUrl": "https://checks.googleapis.com/", "schemas": { "CancelOperationRequest": { diff --git a/googleapiclient/discovery_cache/documents/chromemanagement.v1.json b/googleapiclient/discovery_cache/documents/chromemanagement.v1.json index 543d8b50e48..984a5f9ae24 100644 --- a/googleapiclient/discovery_cache/documents/chromemanagement.v1.json +++ b/googleapiclient/discovery_cache/documents/chromemanagement.v1.json @@ -1172,7 +1172,7 @@ } } }, -"revision": "20240523", +"revision": "20240602", "rootUrl": "https://chromemanagement.googleapis.com/", "schemas": { "GoogleChromeManagementV1AndroidAppInfo": { @@ -1320,6 +1320,85 @@ }, "type": "object" }, +"GoogleChromeManagementV1AppReport": { +"description": "App report.", +"id": "GoogleChromeManagementV1AppReport", +"properties": { +"reportTime": { +"description": "Timestamp when the report was collected.", +"format": "google-datetime", +"type": "string" +}, +"usageData": { +"description": "App usage data.", +"items": { +"$ref": "GoogleChromeManagementV1AppUsageData" +}, +"type": "array" +} +}, +"type": "object" +}, +"GoogleChromeManagementV1AppUsageData": { +"description": "App usage data.", +"id": "GoogleChromeManagementV1AppUsageData", +"properties": { +"appId": { +"description": "App id.", +"type": "string" +}, +"appInstanceId": { +"description": "Application instance id. 
This will be unique per window/instance.", +"type": "string" +}, +"appType": { +"description": "Type of app.", +"enum": [ +"TELEMETRY_APPLICATION_TYPE_UNSPECIFIED", +"APPLICATION_TYPE_ARC", +"APPLICATION_TYPE_BUILT_IN", +"APPLICATION_TYPE_CROSTINI", +"APPLICATION_TYPE_CHROME_APP", +"APPLICATION_TYPE_WEB", +"APPLICATION_TYPE_MAC_OS", +"APPLICATION_TYPE_PLUGIN_VM", +"APPLICATION_TYPE_STANDALONE_BROWSER", +"APPLICATION_TYPE_REMOTE", +"APPLICATION_TYPE_BOREALIS", +"APPLICATION_TYPE_SYSTEM_WEB", +"APPLICATION_TYPE_STANDALONE_BROWSER_CHROME_APP", +"APPLICATION_TYPE_EXTENSION", +"APPLICATION_TYPE_STANDALONE_BROWSER_EXTENSION", +"APPLICATION_TYPE_BRUSCHETTA" +], +"enumDescriptions": [ +"Application type unknown.", +"Application type arc (Android app).", +"Application type built-in.", +"Application type Linux (via Crostini).", +"Application type Chrome app.", +"Application type web.", +"Application type Mac OS.", +"Application type Plugin VM.", +"Application type standalone browser (Lacros browser app).", +"Application type remote.", +"Application type borealis.", +"Application type system web.", +"Application type standalone browser chrome app (hosted in Lacros).", +"Application type extension.", +"Application type standalone browser extension.", +"Application type bruschetta." +], +"type": "string" +}, +"runningDuration": { +"description": "App foreground running time.", +"format": "google-duration", +"type": "string" +} +}, +"type": "object" +}, "GoogleChromeManagementV1AudioStatusReport": { "description": "Status data for storage. * This field is telemetry information and this will change over time as the device is utilized. * Data for this field is controlled via policy: [ReportDeviceAudioStatus](https://chromeenterprise.google/policies/#ReportDeviceAudioStatus) * Data Collection Frequency: 10 minutes * Default Data Reporting Frequency: 3 hours - Policy Controlled: Yes * Cache: If the device is offline, the collected data is stored locally, and will be reported when the device is next online: No * Reported for affiliated users only: N/A * Granular permission needed: TELEMETRY_API_AUDIO_REPORT", "id": "GoogleChromeManagementV1AudioStatusReport", @@ -3771,6 +3850,14 @@ "description": "Telemetry data collected from a managed device. * Granular permission needed: TELEMETRY_API_DEVICE", "id": "GoogleChromeManagementV1TelemetryDevice", "properties": { +"appReport": { +"description": "Output only. App reports collected periodically sorted in a decreasing order of report_time.", +"items": { +"$ref": "GoogleChromeManagementV1AppReport" +}, +"readOnly": true, +"type": "array" +}, "audioStatusReport": { "description": "Output only. Audio reports collected periodically sorted in a decreasing order of report_time.", "items": { @@ -4302,6 +4389,14 @@ "description": "Telemetry data collected for a managed user and device. * Granular permission needed: TELEMETRY_API_DEVICE", "id": "GoogleChromeManagementV1TelemetryUserDevice", "properties": { +"appReport": { +"description": "Output only. App reports collected periodically sorted in a decreasing order of report_time.", +"items": { +"$ref": "GoogleChromeManagementV1AppReport" +}, +"readOnly": true, +"type": "array" +}, "audioStatusReport": { "description": "Output only. 
Audio reports collected periodically sorted in a decreasing order of report_time.", "items": { diff --git a/googleapiclient/discovery_cache/documents/chromepolicy.v1.json b/googleapiclient/discovery_cache/documents/chromepolicy.v1.json index 818238d3319..1cb2911b636 100644 --- a/googleapiclient/discovery_cache/documents/chromepolicy.v1.json +++ b/googleapiclient/discovery_cache/documents/chromepolicy.v1.json @@ -557,7 +557,7 @@ } } }, -"revision": "20240523", +"revision": "20240602", "rootUrl": "https://chromepolicy.googleapis.com/", "schemas": { "GoogleChromePolicyVersionsV1AdditionalTargetKeyName": { diff --git a/googleapiclient/discovery_cache/documents/chromeuxreport.v1.json b/googleapiclient/discovery_cache/documents/chromeuxreport.v1.json index 63a827cfccf..1d01c388001 100644 --- a/googleapiclient/discovery_cache/documents/chromeuxreport.v1.json +++ b/googleapiclient/discovery_cache/documents/chromeuxreport.v1.json @@ -131,7 +131,7 @@ } } }, -"revision": "20240523", +"revision": "20240528", "rootUrl": "https://chromeuxreport.googleapis.com/", "schemas": { "Bin": { diff --git a/googleapiclient/discovery_cache/documents/civicinfo.v2.json b/googleapiclient/discovery_cache/documents/civicinfo.v2.json index 5be11604a31..898ad7fca62 100644 --- a/googleapiclient/discovery_cache/documents/civicinfo.v2.json +++ b/googleapiclient/discovery_cache/documents/civicinfo.v2.json @@ -365,7 +365,7 @@ } } }, -"revision": "20240521", +"revision": "20240528", "rootUrl": "https://civicinfo.googleapis.com/", "schemas": { "AdministrationRegion": { diff --git a/googleapiclient/discovery_cache/documents/classroom.v1.json b/googleapiclient/discovery_cache/documents/classroom.v1.json index 2fb73a45a22..d3b8f853d62 100644 --- a/googleapiclient/discovery_cache/documents/classroom.v1.json +++ b/googleapiclient/discovery_cache/documents/classroom.v1.json @@ -2400,7 +2400,7 @@ } } }, -"revision": "20240520", +"revision": "20240523", "rootUrl": "https://classroom.googleapis.com/", "schemas": { "Announcement": { diff --git a/googleapiclient/discovery_cache/documents/cloudasset.v1.json b/googleapiclient/discovery_cache/documents/cloudasset.v1.json index 97748170977..2770523fa47 100644 --- a/googleapiclient/discovery_cache/documents/cloudasset.v1.json +++ b/googleapiclient/discovery_cache/documents/cloudasset.v1.json @@ -1095,7 +1095,7 @@ } } }, -"revision": "20240525", +"revision": "20240530", "rootUrl": "https://cloudasset.googleapis.com/", "schemas": { "AccessSelector": { diff --git a/googleapiclient/discovery_cache/documents/cloudasset.v1beta1.json b/googleapiclient/discovery_cache/documents/cloudasset.v1beta1.json index c02577d06cf..7833d55276e 100644 --- a/googleapiclient/discovery_cache/documents/cloudasset.v1beta1.json +++ b/googleapiclient/discovery_cache/documents/cloudasset.v1beta1.json @@ -411,7 +411,7 @@ } } }, -"revision": "20240525", +"revision": "20240530", "rootUrl": "https://cloudasset.googleapis.com/", "schemas": { "AnalyzeIamPolicyLongrunningMetadata": { diff --git a/googleapiclient/discovery_cache/documents/cloudasset.v1p1beta1.json b/googleapiclient/discovery_cache/documents/cloudasset.v1p1beta1.json index a420f584e04..94221c2914c 100644 --- a/googleapiclient/discovery_cache/documents/cloudasset.v1p1beta1.json +++ b/googleapiclient/discovery_cache/documents/cloudasset.v1p1beta1.json @@ -207,7 +207,7 @@ } } }, -"revision": "20240525", +"revision": "20240530", "rootUrl": "https://cloudasset.googleapis.com/", "schemas": { "AnalyzeIamPolicyLongrunningMetadata": { diff --git 
a/googleapiclient/discovery_cache/documents/cloudasset.v1p5beta1.json b/googleapiclient/discovery_cache/documents/cloudasset.v1p5beta1.json index 918bd38474e..f522b5855a3 100644 --- a/googleapiclient/discovery_cache/documents/cloudasset.v1p5beta1.json +++ b/googleapiclient/discovery_cache/documents/cloudasset.v1p5beta1.json @@ -177,7 +177,7 @@ } } }, -"revision": "20240525", +"revision": "20240530", "rootUrl": "https://cloudasset.googleapis.com/", "schemas": { "AnalyzeIamPolicyLongrunningMetadata": { diff --git a/googleapiclient/discovery_cache/documents/cloudasset.v1p7beta1.json b/googleapiclient/discovery_cache/documents/cloudasset.v1p7beta1.json index 502c849ec39..dc932f903fa 100644 --- a/googleapiclient/discovery_cache/documents/cloudasset.v1p7beta1.json +++ b/googleapiclient/discovery_cache/documents/cloudasset.v1p7beta1.json @@ -167,7 +167,7 @@ } } }, -"revision": "20240525", +"revision": "20240530", "rootUrl": "https://cloudasset.googleapis.com/", "schemas": { "AnalyzeIamPolicyLongrunningMetadata": { diff --git a/googleapiclient/discovery_cache/documents/cloudbilling.v1.json b/googleapiclient/discovery_cache/documents/cloudbilling.v1.json index 9058d6a85ce..9c59a16c230 100644 --- a/googleapiclient/discovery_cache/documents/cloudbilling.v1.json +++ b/googleapiclient/discovery_cache/documents/cloudbilling.v1.json @@ -751,7 +751,7 @@ } } }, -"revision": "20240517", +"revision": "20240531", "rootUrl": "https://cloudbilling.googleapis.com/", "schemas": { "AggregationInfo": { diff --git a/googleapiclient/discovery_cache/documents/cloudbilling.v1beta.json b/googleapiclient/discovery_cache/documents/cloudbilling.v1beta.json index 5d06974c430..d9e95fe2e5e 100644 --- a/googleapiclient/discovery_cache/documents/cloudbilling.v1beta.json +++ b/googleapiclient/discovery_cache/documents/cloudbilling.v1beta.json @@ -114,6 +114,7 @@ "billingAccounts": { "methods": { "estimateCostScenario": { +"deprecated": true, "description": "Use custom pricing in the estimate, using a `CostScenario` with a defined `billingAccount`.", "flatPath": "v1beta/billingAccounts/{billingAccountsId}:estimateCostScenario", "httpMethod": "POST", @@ -734,8 +735,10 @@ } }, "v1beta": { +"deprecated": true, "methods": { "estimateCostScenario": { +"deprecated": true, "description": "Estimate list prices using a `CostScenario` without a defined `billingAccount`.", "flatPath": "v1beta:estimateCostScenario", "httpMethod": "POST", @@ -758,7 +761,7 @@ } } }, -"revision": "20240517", +"revision": "20240531", "rootUrl": "https://cloudbilling.googleapis.com/", "schemas": { "CacheFillRegions": { diff --git a/googleapiclient/discovery_cache/documents/cloudbuild.v1.json b/googleapiclient/discovery_cache/documents/cloudbuild.v1.json index b4644f55138..537f90b6cc9 100644 --- a/googleapiclient/discovery_cache/documents/cloudbuild.v1.json +++ b/googleapiclient/discovery_cache/documents/cloudbuild.v1.json @@ -2346,7 +2346,7 @@ } } }, -"revision": "20240519", +"revision": "20240528", "rootUrl": "https://cloudbuild.googleapis.com/", "schemas": { "ApprovalConfig": { diff --git a/googleapiclient/discovery_cache/documents/cloudbuild.v2.json b/googleapiclient/discovery_cache/documents/cloudbuild.v2.json index ff971279e15..d4307f76917 100644 --- a/googleapiclient/discovery_cache/documents/cloudbuild.v2.json +++ b/googleapiclient/discovery_cache/documents/cloudbuild.v2.json @@ -844,7 +844,7 @@ } } }, -"revision": "20240519", +"revision": "20240528", "rootUrl": "https://cloudbuild.googleapis.com/", "schemas": { "AuditConfig": { @@ -977,7 +977,7 @@ 
"description": "Required. A http access token with the `REPO_ADMIN` scope access." }, "hostUri": { -"description": "Optional. The URI of the Bitbucket Data Center instance or cluster this connection is for.", +"description": "Required. The URI of the Bitbucket Data Center instance or cluster this connection is for.", "type": "string" }, "readAuthorizerCredential": { diff --git a/googleapiclient/discovery_cache/documents/cloudchannel.v1.json b/googleapiclient/discovery_cache/documents/cloudchannel.v1.json index a5ae6a8b611..ce33c82c583 100644 --- a/googleapiclient/discovery_cache/documents/cloudchannel.v1.json +++ b/googleapiclient/discovery_cache/documents/cloudchannel.v1.json @@ -2183,7 +2183,7 @@ } } }, -"revision": "20240522", +"revision": "20240602", "rootUrl": "https://cloudchannel.googleapis.com/", "schemas": { "GoogleCloudChannelV1ActivateEntitlementRequest": { diff --git a/googleapiclient/discovery_cache/documents/clouddeploy.v1.json b/googleapiclient/discovery_cache/documents/clouddeploy.v1.json index 92f3f102803..2c972c2d60d 100644 --- a/googleapiclient/discovery_cache/documents/clouddeploy.v1.json +++ b/googleapiclient/discovery_cache/documents/clouddeploy.v1.json @@ -2065,7 +2065,7 @@ } } }, -"revision": "20240511", +"revision": "20240515", "rootUrl": "https://clouddeploy.googleapis.com/", "schemas": { "AbandonReleaseRequest": { diff --git a/googleapiclient/discovery_cache/documents/clouderrorreporting.v1beta1.json b/googleapiclient/discovery_cache/documents/clouderrorreporting.v1beta1.json index ad52530dff1..f8fe2d4330e 100644 --- a/googleapiclient/discovery_cache/documents/clouderrorreporting.v1beta1.json +++ b/googleapiclient/discovery_cache/documents/clouderrorreporting.v1beta1.json @@ -431,7 +431,7 @@ } } }, -"revision": "20240524", +"revision": "20240531", "rootUrl": "https://clouderrorreporting.googleapis.com/", "schemas": { "DeleteEventsResponse": { diff --git a/googleapiclient/discovery_cache/documents/cloudfunctions.v1.json b/googleapiclient/discovery_cache/documents/cloudfunctions.v1.json index 88ae6492f51..088862a04b9 100644 --- a/googleapiclient/discovery_cache/documents/cloudfunctions.v1.json +++ b/googleapiclient/discovery_cache/documents/cloudfunctions.v1.json @@ -552,7 +552,7 @@ } } }, -"revision": "20240502", +"revision": "20240523", "rootUrl": "https://cloudfunctions.googleapis.com/", "schemas": { "AuditConfig": { @@ -1716,7 +1716,7 @@ "id": "OnDeployUpdatePolicy", "properties": { "runtimeVersion": { -"description": "Output only. contains the runtime version which was used during latest function deployment.", +"description": "Output only. Contains the runtime version which was used during latest function deployment.", "readOnly": true, "type": "string" } diff --git a/googleapiclient/discovery_cache/documents/cloudfunctions.v2.json b/googleapiclient/discovery_cache/documents/cloudfunctions.v2.json index 217abefd5b3..a9f36b7afd5 100644 --- a/googleapiclient/discovery_cache/documents/cloudfunctions.v2.json +++ b/googleapiclient/discovery_cache/documents/cloudfunctions.v2.json @@ -716,7 +716,7 @@ } } }, -"revision": "20240502", +"revision": "20240523", "rootUrl": "https://cloudfunctions.googleapis.com/", "schemas": { "AbortFunctionUpgradeRequest": { @@ -2201,6 +2201,10 @@ "description": "The amount of memory available for a function. Defaults to 256M. Supported units are k, M, G, Mi, Gi. If no unit is supplied the value is interpreted as bytes. 
See https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apimachinery/pkg/api/resource/quantity.go for a full description.", "type": "string" }, +"binaryAuthorizationPolicy": { +"description": "Optional. The binary authorization policy to be checked when deploying the Cloud Run service.", +"type": "string" +}, "environmentVariables": { "additionalProperties": { "type": "string" @@ -2415,6 +2419,10 @@ "object": { "description": "Google Cloud Storage object containing the source. This object must be a gzipped archive file (`.tar.gz`) containing source to build.", "type": "string" +}, +"sourceUploadUrl": { +"description": "When the specified storage bucket is a 1st gen function upload url bucket, this field should be set as the generated upload url for 1st gen deployment.", +"type": "string" +} }, "type": "object" diff --git a/googleapiclient/discovery_cache/documents/cloudfunctions.v2alpha.json b/googleapiclient/discovery_cache/documents/cloudfunctions.v2alpha.json index 28daed8c00d..11dc4fde05e 100644 --- a/googleapiclient/discovery_cache/documents/cloudfunctions.v2alpha.json +++ b/googleapiclient/discovery_cache/documents/cloudfunctions.v2alpha.json @@ -716,7 +716,7 @@ } } }, -"revision": "20240502", +"revision": "20240523", "rootUrl": "https://cloudfunctions.googleapis.com/", "schemas": { "AbortFunctionUpgradeRequest": { @@ -2201,6 +2201,10 @@ "description": "The amount of memory available for a function. Defaults to 256M. Supported units are k, M, G, Mi, Gi. If no unit is supplied the value is interpreted as bytes. See https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apimachinery/pkg/api/resource/quantity.go for a full description.", "type": "string" }, +"binaryAuthorizationPolicy": { +"description": "Optional. The binary authorization policy to be checked when deploying the Cloud Run service.", +"type": "string" +}, "environmentVariables": { "additionalProperties": { "type": "string" @@ -2415,6 +2419,10 @@ "object": { "description": "Google Cloud Storage object containing the source. This object must be a gzipped archive file (`.tar.gz`) containing source to build.", "type": "string" +}, +"sourceUploadUrl": { +"description": "When the specified storage bucket is a 1st gen function upload url bucket, this field should be set as the generated upload url for 1st gen deployment.", +"type": "string" +} }, "type": "object" diff --git a/googleapiclient/discovery_cache/documents/cloudfunctions.v2beta.json b/googleapiclient/discovery_cache/documents/cloudfunctions.v2beta.json index 774cf346fa1..14b9f1c4e18 100644 --- a/googleapiclient/discovery_cache/documents/cloudfunctions.v2beta.json +++ b/googleapiclient/discovery_cache/documents/cloudfunctions.v2beta.json @@ -716,7 +716,7 @@ } } }, -"revision": "20240502", +"revision": "20240523", "rootUrl": "https://cloudfunctions.googleapis.com/", "schemas": { "AbortFunctionUpgradeRequest": { @@ -2201,6 +2201,10 @@ "description": "The amount of memory available for a function. Defaults to 256M. Supported units are k, M, G, Mi, Gi. If no unit is supplied the value is interpreted as bytes. See https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apimachinery/pkg/api/resource/quantity.go for a full description.", "type": "string" }, +"binaryAuthorizationPolicy": { +"description": "Optional.
The binary authorization policy to be checked when deploying the Cloud Run service.", +"type": "string" +}, "environmentVariables": { "additionalProperties": { "type": "string" @@ -2415,6 +2419,10 @@ "object": { "description": "Google Cloud Storage object containing the source. This object must be a gzipped archive file (`.tar.gz`) containing source to build.", "type": "string" +}, +"sourceUploadUrl": { +"description": "When the specified storage bucket is a 1st gen function upload url bucket, this field should be set as the generated upload url for 1st gen deployment.", +"type": "string" +} }, "type": "object" diff --git a/googleapiclient/discovery_cache/documents/cloudidentity.v1.json b/googleapiclient/discovery_cache/documents/cloudidentity.v1.json index 0988f572ec4..e67221c54bd 100644 --- a/googleapiclient/discovery_cache/documents/cloudidentity.v1.json +++ b/googleapiclient/discovery_cache/documents/cloudidentity.v1.json @@ -1990,7 +1990,7 @@ } } }, -"revision": "20240521", +"revision": "20240527", "rootUrl": "https://cloudidentity.googleapis.com/", "schemas": { "AddIdpCredentialOperationMetadata": { diff --git a/googleapiclient/discovery_cache/documents/cloudidentity.v1beta1.json b/googleapiclient/discovery_cache/documents/cloudidentity.v1beta1.json index 8cc8bd6e07e..7ae4a79756e 100644 --- a/googleapiclient/discovery_cache/documents/cloudidentity.v1beta1.json +++ b/googleapiclient/discovery_cache/documents/cloudidentity.v1beta1.json @@ -2015,7 +2015,7 @@ } } }, -"revision": "20240521", +"revision": "20240527", "rootUrl": "https://cloudidentity.googleapis.com/", "schemas": { "AddIdpCredentialOperationMetadata": { diff --git a/googleapiclient/discovery_cache/documents/cloudkms.v1.json b/googleapiclient/discovery_cache/documents/cloudkms.v1.json index f38c51b713f..bf2ef9d31bd 100644 --- a/googleapiclient/discovery_cache/documents/cloudkms.v1.json +++ b/googleapiclient/discovery_cache/documents/cloudkms.v1.json @@ -2056,7 +2056,7 @@ } } }, -"revision": "20240516", +"revision": "20240523", "rootUrl": "https://cloudkms.googleapis.com/", "schemas": { "AsymmetricDecryptRequest": { diff --git a/googleapiclient/discovery_cache/documents/cloudresourcemanager.v1.json b/googleapiclient/discovery_cache/documents/cloudresourcemanager.v1.json index 3a197713834..2d9ebea6856 100644 --- a/googleapiclient/discovery_cache/documents/cloudresourcemanager.v1.json +++ b/googleapiclient/discovery_cache/documents/cloudresourcemanager.v1.json @@ -1171,7 +1171,7 @@ } } }, -"revision": "20240526", +"revision": "20240602", "rootUrl": "https://cloudresourcemanager.googleapis.com/", "schemas": { "Ancestor": { diff --git a/googleapiclient/discovery_cache/documents/cloudresourcemanager.v1beta1.json b/googleapiclient/discovery_cache/documents/cloudresourcemanager.v1beta1.json index b88efb78496..11534e89e54 100644 --- a/googleapiclient/discovery_cache/documents/cloudresourcemanager.v1beta1.json +++ b/googleapiclient/discovery_cache/documents/cloudresourcemanager.v1beta1.json @@ -568,7 +568,7 @@ } } }, -"revision": "20240526", +"revision": "20240602", "rootUrl": "https://cloudresourcemanager.googleapis.com/", "schemas": { "Ancestor": { diff --git a/googleapiclient/discovery_cache/documents/cloudresourcemanager.v2.json b/googleapiclient/discovery_cache/documents/cloudresourcemanager.v2.json index 63971f02384..452edcc899c 100644 --- a/googleapiclient/discovery_cache/documents/cloudresourcemanager.v2.json +++ b/googleapiclient/discovery_cache/documents/cloudresourcemanager.v2.json @@ -450,7 +450,7 @@ } } }, -"revision":
"20240526", +"revision": "20240602", "rootUrl": "https://cloudresourcemanager.googleapis.com/", "schemas": { "AuditConfig": { diff --git a/googleapiclient/discovery_cache/documents/cloudresourcemanager.v2beta1.json b/googleapiclient/discovery_cache/documents/cloudresourcemanager.v2beta1.json index b735c1b30dc..b47b2c5d00d 100644 --- a/googleapiclient/discovery_cache/documents/cloudresourcemanager.v2beta1.json +++ b/googleapiclient/discovery_cache/documents/cloudresourcemanager.v2beta1.json @@ -450,7 +450,7 @@ } } }, -"revision": "20240526", +"revision": "20240602", "rootUrl": "https://cloudresourcemanager.googleapis.com/", "schemas": { "AuditConfig": { diff --git a/googleapiclient/discovery_cache/documents/cloudresourcemanager.v3.json b/googleapiclient/discovery_cache/documents/cloudresourcemanager.v3.json index 3699e2013cd..7f7e74e53f5 100644 --- a/googleapiclient/discovery_cache/documents/cloudresourcemanager.v3.json +++ b/googleapiclient/discovery_cache/documents/cloudresourcemanager.v3.json @@ -1805,7 +1805,7 @@ } } }, -"revision": "20240526", +"revision": "20240602", "rootUrl": "https://cloudresourcemanager.googleapis.com/", "schemas": { "AuditConfig": { diff --git a/googleapiclient/discovery_cache/documents/cloudscheduler.v1.json b/googleapiclient/discovery_cache/documents/cloudscheduler.v1.json index 97647b96455..aadb097c79a 100644 --- a/googleapiclient/discovery_cache/documents/cloudscheduler.v1.json +++ b/googleapiclient/discovery_cache/documents/cloudscheduler.v1.json @@ -418,7 +418,7 @@ } } }, -"revision": "20240419", +"revision": "20240524", "rootUrl": "https://cloudscheduler.googleapis.com/", "schemas": { "AppEngineHttpTarget": { diff --git a/googleapiclient/discovery_cache/documents/cloudscheduler.v1beta1.json b/googleapiclient/discovery_cache/documents/cloudscheduler.v1beta1.json index f0b5f777a7b..ae73f8c30cb 100644 --- a/googleapiclient/discovery_cache/documents/cloudscheduler.v1beta1.json +++ b/googleapiclient/discovery_cache/documents/cloudscheduler.v1beta1.json @@ -433,7 +433,7 @@ } } }, -"revision": "20240419", +"revision": "20240524", "rootUrl": "https://cloudscheduler.googleapis.com/", "schemas": { "AppEngineHttpTarget": { diff --git a/googleapiclient/discovery_cache/documents/cloudsearch.v1.json b/googleapiclient/discovery_cache/documents/cloudsearch.v1.json index 922d0e69d67..ea1ccff14cc 100644 --- a/googleapiclient/discovery_cache/documents/cloudsearch.v1.json +++ b/googleapiclient/discovery_cache/documents/cloudsearch.v1.json @@ -937,6 +937,25 @@ }, "query": { "methods": { +"debugSearch": { +"description": "Returns Debug information for Cloud Search Query API provides the search method. **Note:** This API requires a standard end user account to execute. A service account can't perform Query API requests directly; to use a service account to perform queries, set up [Google Workspace domain-wide delegation of authority](https://developers.google.com/cloud-search/docs/guides/delegation/).", +"flatPath": "v1/query:debugSearch", +"httpMethod": "POST", +"id": "cloudsearch.query.debugSearch", +"parameterOrder": [], +"parameters": {}, +"path": "v1/query:debugSearch", +"request": { +"$ref": "SearchRequest" +}, +"response": { +"$ref": "DebugResponse" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud_search", +"https://www.googleapis.com/auth/cloud_search.query" +] +}, "removeActivity": { "description": "Provides functionality to remove logged activity for a user. 
Currently to be used only for Chat 1p clients **Note:** This API requires a standard end user account to execute. A service account can't perform Remove Activity requests directly; to use a service account to perform queries, set up [Google Workspace domain-wide delegation of authority](https://developers.google.com/cloud-search/docs/guides/delegation/).", "flatPath": "v1/query:removeActivity", @@ -2096,7 +2115,7 @@ } } }, -"revision": "20240501", +"revision": "20240529", "rootUrl": "https://cloudsearch.googleapis.com/", "schemas": { "Action": { @@ -2653,6 +2672,27 @@ }, "type": "object" }, +"DebugResponse": { +"description": "Debug Search Response.", +"id": "DebugResponse", +"properties": { +"gsrRequest": { +"description": "Serialized string of GenericSearchRequest.", +"format": "byte", +"type": "string" +}, +"gsrResponse": { +"description": "Serialized string of GenericSearchResponse.", +"format": "byte", +"type": "string" +}, +"searchResponse": { +"$ref": "SearchResponse", +"description": "Search response." +} +}, +"type": "object" +}, "DeleteQueueItemsRequest": { "id": "DeleteQueueItemsRequest", "properties": { diff --git a/googleapiclient/discovery_cache/documents/cloudshell.v1.json b/googleapiclient/discovery_cache/documents/cloudshell.v1.json index b9c50063c9c..71c594e9302 100644 --- a/googleapiclient/discovery_cache/documents/cloudshell.v1.json +++ b/googleapiclient/discovery_cache/documents/cloudshell.v1.json @@ -374,7 +374,7 @@ } } }, -"revision": "20240513", +"revision": "20240603", "rootUrl": "https://cloudshell.googleapis.com/", "schemas": { "AddPublicKeyMetadata": { diff --git a/googleapiclient/discovery_cache/documents/cloudsupport.v2.json b/googleapiclient/discovery_cache/documents/cloudsupport.v2.json index 2c45c7ed52c..01f2f17e575 100644 --- a/googleapiclient/discovery_cache/documents/cloudsupport.v2.json +++ b/googleapiclient/discovery_cache/documents/cloudsupport.v2.json @@ -552,7 +552,7 @@ } } }, -"revision": "20240522", +"revision": "20240602", "rootUrl": "https://cloudsupport.googleapis.com/", "schemas": { "Actor": { diff --git a/googleapiclient/discovery_cache/documents/cloudsupport.v2beta.json b/googleapiclient/discovery_cache/documents/cloudsupport.v2beta.json index a07186a8b0c..c02bfc8a388 100644 --- a/googleapiclient/discovery_cache/documents/cloudsupport.v2beta.json +++ b/googleapiclient/discovery_cache/documents/cloudsupport.v2beta.json @@ -619,7 +619,7 @@ } } }, -"revision": "20240522", +"revision": "20240602", "rootUrl": "https://cloudsupport.googleapis.com/", "schemas": { "Actor": { diff --git a/googleapiclient/discovery_cache/documents/compute.alpha.json b/googleapiclient/discovery_cache/documents/compute.alpha.json index d7eebc54d8f..c3be0d14ebc 100644 --- a/googleapiclient/discovery_cache/documents/compute.alpha.json +++ b/googleapiclient/discovery_cache/documents/compute.alpha.json @@ -44423,7 +44423,7 @@ } } }, -"revision": "20240521", +"revision": "20240526", "rootUrl": "https://compute.googleapis.com/", "schemas": { "AWSV4Signature": { @@ -68460,6 +68460,10 @@ false "description": "A full or partial URL of the network placement to apply to this network. This field can be set only at resource creation time. 
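Since `debugSearch` reuses the `SearchRequest` message and returns the new `DebugResponse` schema defined above, calling it from the generated client looks like an ordinary `query.search` call; a minimal sketch, assuming end-user credentials are available through Application Default Credentials (the query text and search application ID are placeholders):

```python
from googleapiclient.discovery import build

# Note: per the method description, a standard end-user account is required;
# service accounts need Workspace domain-wide delegation.
cloudsearch = build("cloudsearch", "v1")

debug = cloudsearch.query().debugSearch(
    body={
        "query": "quarterly report",  # placeholder
        "requestOptions": {
            "searchApplicationId": "searchapplications/default"  # placeholder
        },
    }
).execute()

# gsrRequest / gsrResponse are serialized protos ("format": "byte");
# searchResponse mirrors what query.search would have returned.
print(debug.get("searchResponse", {}).get("resultCountExact"))
```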
For example, the following are valid URLs: - https://www.googleapis.com/compute/alpha/projects/{project_id}/global/networkPlacements/{network_placement_name} - projects/{project_id}/global/networkPlacements/{network_placement_name} ", "type": "string" }, +"networkProfile": { +"description": "A full or partial URL of the network profile to apply to this network. This field can be set only at resource creation time. For example, the following are valid URLs: - https://www.googleapis.com/compute/alpha/projects/{project_id}/global/networkProfiles/{network_profile_name} - projects/{project_id}/global/networkProfiles/{network_profile_name} ", +"type": "string" +}, "peerings": { "description": "[Output Only] A list of network peerings for the resource.", "items": { @@ -99176,7 +99180,7 @@ false "type": "object" }, "Zone": { -"description": "Represents a Zone resource. A zone is a deployment area. These deployment areas are subsets of a region. For example the zone us-east1-a is located in the us-east1 region. For more information, read Regions and Zones.", +"description": "Represents a Zone resource. A zone is a deployment area. These deployment areas are subsets of a region. For example the zone us-east1-b is located in the us-east1 region. For more information, read Regions and Zones.", "id": "Zone", "properties": { "availableCpuPlatforms": { diff --git a/googleapiclient/discovery_cache/documents/compute.beta.json b/googleapiclient/discovery_cache/documents/compute.beta.json index a611c3d4217..9f179e3e4d0 100644 --- a/googleapiclient/discovery_cache/documents/compute.beta.json +++ b/googleapiclient/discovery_cache/documents/compute.beta.json @@ -18152,7 +18152,7 @@ ] }, "patch": { -"description": "Patches the specified network with the data included in the request. Only the following fields can be modified: routingConfig.routingMode.", +"description": "Patches the specified network with the data included in the request. Only routingConfig can be modified.", "flatPath": "projects/{project}/global/networks/{network}", "httpMethod": "PATCH", "id": "compute.networks.patch", @@ -41579,7 +41579,7 @@ } } }, -"revision": "20240519", +"revision": "20240526", "rootUrl": "https://compute.googleapis.com/", "schemas": { "AWSV4Signature": { @@ -43157,7 +43157,7 @@ false "description": "[Output Only] shielded vm initial state stored on disk" }, "source": { -"description": "Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk.", +"description": "Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk.", "type": "string" }, "type": { @@ -43292,7 +43292,7 @@ false "type": "array" }, "sourceImage": { -"description": "The source image to create this disk. 
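The reworded `networks.patch` description ("Only routingConfig can be modified") corresponds to a request like the following; a minimal sketch against the beta surface, with placeholder project and network names:

```python
from googleapiclient.discovery import build

compute = build("compute", "beta")

# Per the updated description, only routingConfig is patchable here.
operation = compute.networks().patch(
    project="my-project",  # placeholder
    network="my-network",  # placeholder
    body={"routingConfig": {"routingMode": "GLOBAL"}},
).execute()
print(operation["name"])  # operation to poll for completion
```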
When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set.", +"description": "The source image to create this disk. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set.", "type": "string" }, "sourceImageEncryptionKey": { @@ -43300,11 +43300,11 @@ false "description": "The customer-supplied encryption key of the source image. Required if the source image is protected by a customer-supplied encryption key. InstanceTemplate and InstancePropertiesPatch do not store customer-supplied encryption keys, so you cannot create disks for instances in a managed instance group if the source images are encrypted with your own keys." }, "sourceInstantSnapshot": { -"description": "The source instant-snapshot to create this disk. When creating a new instance, one of initializeParams.sourceSnapshot or initializeParams.sourceInstantSnapshot initializeParams.sourceImage or disks.source is required except for local SSD. To create a disk with a snapshot that you created, specify the snapshot name in the following format: us-central1-a/instantSnapshots/my-backup If the source instant-snapshot is deleted later, this field will not be set.", +"description": "The source instant-snapshot to create this disk. When creating a new instance boot disk, one of initializeParams.sourceSnapshot or initializeParams.sourceInstantSnapshot initializeParams.sourceImage or disks.source is required. To create a disk with a snapshot that you created, specify the snapshot name in the following format: us-central1-a/instantSnapshots/my-backup If the source instant-snapshot is deleted later, this field will not be set.", "type": "string" }, "sourceSnapshot": { -"description": "The source snapshot to create this disk. 
When creating a new instance, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required except for local SSD. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set.", +"description": "The source snapshot to create this disk. When creating a new instance boot disk, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set.", "type": "string" }, "sourceSnapshotEncryptionKey": { @@ -55415,6 +55415,14 @@ false "description": "[Output Only] The URL of the region where the managed instance group resides (for regional resources).", "type": "string" }, +"satisfiesPzi": { +"description": "[Output Only] Reserved for future use.", +"type": "boolean" +}, +"satisfiesPzs": { +"description": "[Output Only] Reserved for future use.", +"type": "boolean" +}, "selfLink": { "description": "[Output Only] The URL for this managed instance group. The server defines this URL.", "type": "string" @@ -55752,6 +55760,10 @@ false }, "description": "Named instance selections configuring properties that the group will use when creating new VMs.", "type": "object" +}, +"provisioningModelMix": { +"$ref": "InstanceGroupManagerInstanceFlexibilityPolicyProvisioningModelMix", +"description": "Provisioning model configuration used by this managed instance group to create instances." } }, "type": "object" @@ -55774,6 +55786,22 @@ false }, "type": "object" }, +"InstanceGroupManagerInstanceFlexibilityPolicyProvisioningModelMix": { +"id": "InstanceGroupManagerInstanceFlexibilityPolicyProvisioningModelMix", +"properties": { +"standardCapacityBase": { +"description": "The base capacity that will always use Standard VMs to avoid risk of more preemption than the minimum capacity user needs. MIG will create only Standard VMs until it reaches standard_capacity_base and only then will start using standard_capacity_percent_above_base to mix Spot with Standard VMs.", +"format": "int32", +"type": "integer" +}, +"standardCapacityPercentAboveBase": { +"description": "The percentage of target capacity that should use Standard VM. The remaining percentage will use Spot VMs. The percentage applies only to the capacity above standard_capacity_base.", +"format": "int32", +"type": "integer" +} +}, +"type": "object" +}, "InstanceGroupManagerInstanceLifecyclePolicy": { "id": "InstanceGroupManagerInstanceLifecyclePolicy", "properties": { @@ -63357,6 +63385,18 @@ false "machineType": { "description": "The machine type to be used for this instance.", "type": "string" +}, +"provisioningModel": { +"description": "The provisioning model to be used for this instance.", +"enum": [ +"SPOT", +"STANDARD" +], +"enumDescriptions": [ +"Heavily discounted, no guaranteed runtime.", +"Standard provisioning with user controlled runtime, no discounts." +], +"type": "string" } }, "type": "object" @@ -91013,7 +91053,7 @@ false "type": "object" }, "Zone": { -"description": "Represents a Zone resource. A zone is a deployment area. These deployment areas are subsets of a region. For example the zone us-east1-a is located in the us-east1 region. For more information, read Regions and Zones.", +"description": "Represents a Zone resource. 
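The new `InstanceGroupManagerInstanceFlexibilityPolicyProvisioningModelMix` message above hangs off a managed instance group's `instanceFlexibilityPolicy`; a minimal sketch of patching it in, assuming the beta client and placeholder resource names:

```python
from googleapiclient.discovery import build

compute = build("compute", "beta")

# Keep a base of 10 Standard VMs; above that base, 30% Standard / 70% Spot.
body = {
    "instanceFlexibilityPolicy": {
        "provisioningModelMix": {
            "standardCapacityBase": 10,
            "standardCapacityPercentAboveBase": 30,
        }
    }
}
operation = compute.instanceGroupManagers().patch(
    project="my-project",           # placeholder
    zone="us-central1-b",           # placeholder
    instanceGroupManager="my-mig",  # placeholder
    body=body,
).execute()
```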
A zone is a deployment area. These deployment areas are subsets of a region. For example the zone us-east1-b is located in the us-east1 region. For more information, read Regions and Zones.", "id": "Zone", "properties": { "availableCpuPlatforms": { diff --git a/googleapiclient/discovery_cache/documents/compute.v1.json b/googleapiclient/discovery_cache/documents/compute.v1.json index d3fc261671e..5f512e2d319 100644 --- a/googleapiclient/discovery_cache/documents/compute.v1.json +++ b/googleapiclient/discovery_cache/documents/compute.v1.json @@ -16605,7 +16605,7 @@ ] }, "patch": { -"description": "Patches the specified network with the data included in the request. Only the following fields can be modified: routingConfig.routingMode.", +"description": "Patches the specified network with the data included in the request. Only routingConfig can be modified.", "flatPath": "projects/{project}/global/networks/{network}", "httpMethod": "PATCH", "id": "compute.networks.patch", @@ -37421,7 +37421,7 @@ } } }, -"revision": "20240519", +"revision": "20240526", "rootUrl": "https://compute.googleapis.com/", "schemas": { "AWSV4Signature": { @@ -38976,7 +38976,7 @@ false "description": "[Output Only] shielded vm initial state stored on disk" }, "source": { -"description": "Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk.", +"description": "Specifies a valid partial or full URL to an existing Persistent Disk resource. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. If desired, you can also attach existing non-root persistent disks using this property. This field is only applicable for persistent disks. Note that for InstanceTemplate, specify the disk name for zonal disk, and the URL for regional disk.", "type": "string" }, "type": { @@ -39093,7 +39093,7 @@ false "type": "array" }, "sourceImage": { -"description": "The source image to create this disk. When creating a new instance, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required except for local SSD. To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set.", +"description": "The source image to create this disk. When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required. 
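The clarified boot-disk wording ("When creating a new instance boot disk, one of initializeParams.sourceImage or initializeParams.sourceSnapshot or disks.source is required") maps onto the familiar instance-insert body; a minimal sketch against compute v1, with placeholder project, zone, instance name, and image family:

```python
from googleapiclient.discovery import build

compute = build("compute", "v1")

instance_body = {
    "name": "example-vm",  # placeholder
    "machineType": "zones/us-east1-b/machineTypes/e2-small",
    "disks": [{
        "boot": True,
        "autoDelete": True,
        # For a new boot disk, exactly one of sourceImage, sourceSnapshot,
        # or disks.source must be supplied.
        "initializeParams": {
            "sourceImage": "projects/debian-cloud/global/images/family/debian-12"
        },
    }],
    "networkInterfaces": [{"network": "global/networks/default"}],
}
operation = compute.instances().insert(
    project="my-project", zone="us-east1-b", body=instance_body  # placeholders
).execute()
```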
To create a disk with one of the public operating system images, specify the image by its family name. For example, specify family/debian-9 to use the latest Debian 9 image: projects/debian-cloud/global/images/family/debian-9 Alternatively, use a specific version of a public operating system image: projects/debian-cloud/global/images/debian-9-stretch-vYYYYMMDD To create a disk with a custom image that you created, specify the image name in the following format: global/images/my-custom-image You can also specify a custom image by its image family, which returns the latest version of the image in that family. Replace the image name with family/family-name: global/images/family/my-image-family If the source image is deleted later, this field will not be set.", "type": "string" }, "sourceImageEncryptionKey": { @@ -39101,7 +39101,7 @@ false "description": "The customer-supplied encryption key of the source image. Required if the source image is protected by a customer-supplied encryption key. InstanceTemplate and InstancePropertiesPatch do not store customer-supplied encryption keys, so you cannot create disks for instances in a managed instance group if the source images are encrypted with your own keys." }, "sourceSnapshot": { -"description": "The source snapshot to create this disk. When creating a new instance, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required except for local SSD. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set.", +"description": "The source snapshot to create this disk. When creating a new instance boot disk, one of initializeParams.sourceSnapshot or initializeParams.sourceImage or disks.source is required. To create a disk with a snapshot that you created, specify the snapshot name in the following format: global/snapshots/my-backup If the source snapshot is deleted later, this field will not be set.", "type": "string" }, "sourceSnapshotEncryptionKey": { @@ -50045,6 +50045,14 @@ false "description": "[Output Only] The URL of the region where the managed instance group resides (for regional resources).", "type": "string" }, +"satisfiesPzi": { +"description": "[Output Only] Reserved for future use.", +"type": "boolean" +}, +"satisfiesPzs": { +"description": "[Output Only] Reserved for future use.", +"type": "boolean" +}, "selfLink": { "description": "[Output Only] The URL for this managed instance group. The server defines this URL.", "type": "string" @@ -78322,6 +78330,20 @@ false "description": "URL of SslPolicy resource that will be associated with the TargetHttpsProxy resource. If not set, the TargetHttpsProxy resource has no SSL policy configured.", "type": "string" }, +"tlsEarlyData": { +"description": " Specifies whether TLS 1.3 0-RTT Data (\"Early Data\") should be accepted for this service. Early Data allows a TLS resumption handshake to include the initial application payload (a HTTP request) alongside the handshake, reducing the effective round trips to \"zero\". This applies to TLS 1.3 connections over TCP (HTTP/2) as well as over UDP (QUIC/h3). This can improve application performance, especially on networks where interruptions may be common, such as on mobile. Requests with Early Data will have the \"Early-Data\" HTTP header set on the request, with a value of \"1\", to allow the backend to determine whether Early Data was included. 
Note: TLS Early Data may allow requests to be replayed, as the data is sent to the backend before the handshake has fully completed. Applications that allow idempotent HTTP methods to make non-idempotent changes, such as a GET request updating a database, should not accept Early Data on those requests, and reject requests with the \"Early-Data: 1\" HTTP header by returning a HTTP 425 (Too Early) status code, in order to remain RFC compliant. The default value is DISABLED.", +"enum": [ +"DISABLED", +"PERMISSIVE", +"STRICT" +], +"enumDescriptions": [ +"TLS 1.3 Early Data is not advertised, and any (invalid) attempts to send Early Data will be rejected by closing the connection.", +"This enables TLS 1.3 0-RTT, and only allows Early Data to be included on requests with safe HTTP methods (GET, HEAD, OPTIONS, TRACE). This mode does not enforce any other limitations for requests with Early Data. The application owner should validate that Early Data is acceptable for a given request path.", +"This enables TLS 1.3 0-RTT, and only allows Early Data to be included on requests with safe HTTP methods (GET, HEAD, OPTIONS, TRACE) without query parameters. Requests that send Early Data with non-idempotent HTTP methods or with query parameters will be rejected with a HTTP 425." +], +"type": "string" +}, "urlMap": { "description": "A fully-qualified or valid partial URL to the UrlMap resource that defines the mapping from URL to the BackendService. For example, the following are all valid URLs for specifying a URL map: - https://www.googleapis.compute/v1/projects/project/global/urlMaps/ url-map - projects/project/global/urlMaps/url-map - global/urlMaps/url-map ", "type": "string" @@ -83923,7 +83945,7 @@ false "type": "object" }, "Zone": { -"description": "Represents a Zone resource. A zone is a deployment area. These deployment areas are subsets of a region. For example the zone us-east1-a is located in the us-east1 region. For more information, read Regions and Zones.", +"description": "Represents a Zone resource. A zone is a deployment area. These deployment areas are subsets of a region. For example the zone us-east1-b is located in the us-east1 region. For more information, read Regions and Zones.", "id": "Zone", "properties": { "availableCpuPlatforms": { diff --git a/googleapiclient/discovery_cache/documents/connectors.v1.json b/googleapiclient/discovery_cache/documents/connectors.v1.json index b4890fb766d..b6a976f3b55 100644 --- a/googleapiclient/discovery_cache/documents/connectors.v1.json +++ b/googleapiclient/discovery_cache/documents/connectors.v1.json @@ -2427,7 +2427,7 @@ } } }, -"revision": "20240515", +"revision": "20240529", "rootUrl": "https://connectors.googleapis.com/", "schemas": { "AuditConfig": { @@ -3103,6 +3103,11 @@ "description": "Connectors indicates a specific connector type, e.x. Salesforce, SAP etc.", "id": "Connector", "properties": { +"category": { +"description": "Output only. Category of the connector.", +"readOnly": true, +"type": "string" +}, "createTime": { "description": "Output only. Created time.", "format": "google-datetime", @@ -3166,6 +3171,14 @@ "readOnly": true, "type": "string" }, +"tags": { +"description": "Output only. Tags of the connector.", +"items": { +"type": "string" +}, +"readOnly": true, +"type": "array" +}, "updateTime": { "description": "Output only. 
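The `tlsEarlyData` enum above is an ordinary proxy setting; a minimal sketch of enabling PERMISSIVE mode via `targetHttpsProxies.patch`, with placeholder project and proxy names:

```python
from googleapiclient.discovery import build

compute = build("compute", "v1")

# PERMISSIVE limits 0-RTT data to safe methods (GET, HEAD, OPTIONS, TRACE);
# backends can still detect replayable requests via the "Early-Data: 1" header.
operation = compute.targetHttpsProxies().patch(
    project="my-project",               # placeholder
    targetHttpsProxy="my-https-proxy",  # placeholder
    body={"tlsEarlyData": "PERMISSIVE"},
).execute()
```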
Updated time.", "format": "google-datetime", diff --git a/googleapiclient/discovery_cache/documents/connectors.v2.json b/googleapiclient/discovery_cache/documents/connectors.v2.json index bbafe160a1e..069c44120a5 100644 --- a/googleapiclient/discovery_cache/documents/connectors.v2.json +++ b/googleapiclient/discovery_cache/documents/connectors.v2.json @@ -650,6 +650,62 @@ ] } } +}, +"entitieswithacls": { +"methods": { +"list": { +"description": "Lists entity rows with ACLs of a particular entity type contained in the request. Note: 1. Currently, only max of one 'sort_by' column is supported. 2. If no 'sort_by' column is provided, the primary key of the table is used. If zero or more than one primary key is available, we default to the unpaginated list entities logic which only returns the first page. 3. The values of the 'sort_by' columns must uniquely identify an entity row, otherwise undefined behaviors may be observed during pagination. 4. Since transactions are not supported, any updates, inserts or deletes during pagination can lead to stale data being returned or other unexpected behaviors.", +"flatPath": "v2/projects/{projectsId}/locations/{locationsId}/connections/{connectionsId}/entityTypes/{entityTypesId}/entitieswithacls", +"httpMethod": "GET", +"id": "connectors.projects.locations.connections.entityTypes.entitieswithacls.list", +"parameterOrder": [ +"parent" +], +"parameters": { +"conditions": { +"description": "Conditions to be used when listing entities. From a proto standpoint, There are no restrictions on what can be passed using this field. The connector documentation should have information about what format of filters/conditions are supported.", +"location": "query", +"type": "string" +}, +"gsutilUri": { +"description": "Format: gs://object_path", +"location": "query", +"type": "string" +}, +"pageSize": { +"description": "Number of entity rows to return. Defaults page size = 25. Max page size = 200.", +"format": "int32", +"location": "query", +"type": "integer" +}, +"pageToken": { +"description": "Page token value if available from a previous request.", +"location": "query", +"type": "string" +}, +"parent": { +"description": "Required. Resource name of the Entity Type. Format: projects/{project}/locations/{location}/connections/{connection}/entityTypes/{type}", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/connections/[^/]+/entityTypes/[^/]+$", +"required": true, +"type": "string" +}, +"sortBy": { +"description": "List of 'sort_by' columns to use when returning the results.", +"location": "query", +"repeated": true, +"type": "string" +} +}, +"path": "v2/{+parent}/entitieswithacls", +"response": { +"$ref": "ListEntitiesWithACLsResponse" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +} +} } } } @@ -660,7 +716,7 @@ } } }, -"revision": "20240515", +"revision": "20240529", "rootUrl": "https://connectors.googleapis.com/", "schemas": { "AccessCredentials": { @@ -683,6 +739,20 @@ }, "type": "object" }, +"AclInfo": { +"description": "AclInfo has a list of readers for a resource. 
This is defined as per the below docs https://cloud.google.com/generative-ai-app-builder/docs/reference/rest/v1alpha/projects.locations.collections.dataStores.branches.documents#aclinfo", +"id": "AclInfo", +"properties": { +"readers": { +"description": "A list of readers for a resource.", +"items": { +"$ref": "Readers" +}, +"type": "array" +} +}, +"type": "object" +}, "Action": { "description": "Action message contains metadata information about a single action present in the external system.", "id": "Action", @@ -887,6 +957,25 @@ }, "type": "object" }, +"EntityWithACL": { +"description": "EntityWithACL refers to a single row of an entity type with ACL information.", +"id": "EntityWithACL", +"properties": { +"acl_info": { +"$ref": "AclInfo", +"description": "ACL information of the entity." +}, +"id": { +"readOnly": true, +"type": "string" +}, +"jsonData": { +"description": "Entity data in JSON format.", +"type": "string" +} +}, +"type": "object" +}, "ExchangeAuthCodeRequest": { "description": "ExchangeAuthCodeRequest currently includes no fields.", "id": "ExchangeAuthCodeRequest", @@ -1703,6 +1792,24 @@ false }, "type": "object" }, +"ListEntitiesWithACLsResponse": { +"description": "Response message for EntityService.ListEntitiesWithACLs", +"id": "ListEntitiesWithACLsResponse", +"properties": { +"entitiesWithAcl": { +"description": "List containing entity rows.", +"items": { +"$ref": "EntityWithACL" +}, +"type": "array" +}, +"nextPageToken": { +"description": "Next page token if more records are available.", +"type": "string" +} +}, +"type": "object" +}, "ListEntityTypesResponse": { "description": "Response message for EntityService.ListEntityTypes", "id": "ListEntityTypesResponse", @@ -1893,6 +2000,21 @@ false }, "type": "object" }, +"Principal": { +"description": "Principal is a user or group that has access to a resource.", +"id": "Principal", +"properties": { +"group_id": { +"description": "The group that has access to a resource.", +"type": "string" +}, +"user_id": { +"description": "The user that has access to a resource.", +"type": "string" +} +}, +"type": "object" +}, "ProvisionedResource": { "description": "Describes provisioned dataplane resources.", "id": "ProvisionedResource", @@ -2090,6 +2212,20 @@ false }, "type": "object" }, +"Readers": { +"description": "Readers is a list of principals that have read access to a resource.", +"id": "Readers", +"properties": { +"principals": { +"description": "A list of principals that have read access to a resource.", +"items": { +"$ref": "Principal" +}, +"type": "array" +} +}, +"type": "object" +}, "Reference": { "id": "Reference", "properties": { diff --git a/googleapiclient/discovery_cache/documents/contactcenteraiplatform.v1alpha1.json b/googleapiclient/discovery_cache/documents/contactcenteraiplatform.v1alpha1.json index 361698abff5..c0253abf3bb 100644 --- a/googleapiclient/discovery_cache/documents/contactcenteraiplatform.v1alpha1.json +++ b/googleapiclient/discovery_cache/documents/contactcenteraiplatform.v1alpha1.json @@ -512,7 +512,7 @@ } } }, -"revision": "20240524", +"revision": "20240531", "rootUrl": "https://contactcenteraiplatform.googleapis.com/", "schemas": { "AdminUser": { @@ -1166,6 +1166,13 @@ "description": "Container for the VPC-SC networking configurations.", "id": "ServiceAttachment", "properties": { +"allowedProjectIds": { +"description": "The list of project ids that are allowed to send traffic to the service attachment. 
This field should be filled only for the ingress service attachments.", +"items": { +"type": "string" +}, +"type": "array" +}, "name": { "description": "The service attachment name that will be used for sending private traffic to the CCAIP tenant project. Example: \"projects/${TENANT_PROJECT_ID}/regions/${REGION}/serviceAttachments/ingress-default\".", "type": "string" diff --git a/googleapiclient/discovery_cache/documents/contactcenterinsights.v1.json b/googleapiclient/discovery_cache/documents/contactcenterinsights.v1.json index 2f992bdc989..6ff3cd43796 100644 --- a/googleapiclient/discovery_cache/documents/contactcenterinsights.v1.json +++ b/googleapiclient/discovery_cache/documents/contactcenterinsights.v1.json @@ -1474,131 +1474,9 @@ } } }, -"revision": "20240520", +"revision": "20240603", "rootUrl": "https://contactcenterinsights.googleapis.com/", "schemas": { -"GoogleCloudContactcenterinsightsV1AgentCoachingInstruction": { -"description": "Agent Coaching instructions that customer can configure.", -"id": "GoogleCloudContactcenterinsightsV1AgentCoachingInstruction", -"properties": { -"agentAction": { -"description": "Optional. The action that human agent should take. For example, \"apologize for the slow shipping\". If the users only want to use agent coaching for intent detection, agent_action can be empty", -"type": "string" -}, -"condition": { -"description": "Optional. The condition of the instruction. For example, \"the customer wants to cancel an order\". If the users want the instruction to be triggered unconditionally, the condition can be empty.", -"type": "string" -}, -"description": { -"description": "Optional. The detailed description of this instruction.", -"type": "string" -}, -"displayName": { -"description": "Optional. Display name for the instruction.", -"type": "string" -}, -"metadata": { -"additionalProperties": { -"type": "string" -}, -"description": "Optional. Additional information attached to this instruction.", -"type": "object" -}, -"systemAction": { -"description": "Optional. The action that system should take. For example, \"call GetOrderTime with order_number={order number provided by the customer}\". If the users don't have plugins or don't want to trigger plugins, the system_action can be empty", -"type": "string" -} -}, -"type": "object" -}, -"GoogleCloudContactcenterinsightsV1AgentCoachingSuggestion": { -"description": "Suggestion for coaching agents.", -"id": "GoogleCloudContactcenterinsightsV1AgentCoachingSuggestion", -"properties": { -"agentActionSuggestions": { -"description": "Optional. Suggested actions for the agent to take.", -"items": { -"$ref": "GoogleCloudContactcenterinsightsV1AgentCoachingSuggestionAgentActionSuggestion" -}, -"type": "array" -}, -"applicableInstructions": { -"description": "Optional. Instructions applicable based on the current context.", -"items": { -"$ref": "GoogleCloudContactcenterinsightsV1AgentCoachingInstruction" -}, -"type": "array" -}, -"sampleResponses": { -"description": "Optional. Sample response for the Agent.", -"items": { -"$ref": "GoogleCloudContactcenterinsightsV1AgentCoachingSuggestionSampleResponse" -}, -"type": "array" -}, -"suggestionEval": { -"$ref": "GoogleCloudContactcenterinsightsV1AgentCoachingSuggestionAgentCoachingSuggestionEval", -"description": "Self evaluation of the suggestion." -}, -"suggestionReasoning": { -"$ref": "GoogleCloudContactcenterinsightsV1AgentCoachingSuggestionAgentCoachingSuggestionReasoning", -"description": "Reasoning for the suggestion." 
-} -}, -"type": "object" -}, -"GoogleCloudContactcenterinsightsV1AgentCoachingSuggestionAgentActionSuggestion": { -"description": "Actions suggested for the agent. This is based on applicable instructions.", -"id": "GoogleCloudContactcenterinsightsV1AgentCoachingSuggestionAgentActionSuggestion", -"properties": { -"agentAction": { -"description": "Optional. The suggested action for the agent.", -"type": "string" -} -}, -"type": "object" -}, -"GoogleCloudContactcenterinsightsV1AgentCoachingSuggestionAgentCoachingSuggestionEval": { -"description": "Self evaluations of the suggestion.", -"id": "GoogleCloudContactcenterinsightsV1AgentCoachingSuggestionAgentCoachingSuggestionEval", -"properties": { -"actionActionSuggestionEval": { -"description": "Optional. Eval for Agent action suggestion.", -"type": "string" -}, -"sampleResponseEval": { -"description": "Optional. Eval for sample response.", -"type": "string" -} -}, -"type": "object" -}, -"GoogleCloudContactcenterinsightsV1AgentCoachingSuggestionAgentCoachingSuggestionReasoning": { -"description": "Reasoning for the suggestion.", -"id": "GoogleCloudContactcenterinsightsV1AgentCoachingSuggestionAgentCoachingSuggestionReasoning", -"properties": { -"agentActionTaken": { -"description": "Optional. The actions that the agent has taken already.", -"type": "string" -}, -"issueSummary": { -"description": "Optional. Summary of the issue.", -"type": "string" -} -}, -"type": "object" -}, -"GoogleCloudContactcenterinsightsV1AgentCoachingSuggestionSampleResponse": { -"description": "Sample response that the agent can use. This could be based on applicable instructions and ingested data from other systems.", -"id": "GoogleCloudContactcenterinsightsV1AgentCoachingSuggestionSampleResponse", -"properties": { -"responseText": { -"description": "Optional. Sample response for Agent in text.", -"type": "string" -} -}, -"type": "object" -}, "GoogleCloudContactcenterinsightsV1Analysis": { "description": "The analysis resource.", "id": "GoogleCloudContactcenterinsightsV1Analysis", @@ -3049,24 +2927,6 @@ }, "type": "object" }, -"GoogleCloudContactcenterinsightsV1FreeFormSuggestion": { -"description": "Suggestion generated using free form generator.", -"id": "GoogleCloudContactcenterinsightsV1FreeFormSuggestion", -"properties": { -"labels": { -"description": "Optional. Labels for the generator.", -"items": { -"type": "string" -}, -"type": "array" -}, -"response": { -"description": "Required. Free form suggestion.", -"type": "string" -} -}, -"type": "object" -}, "GoogleCloudContactcenterinsightsV1GcsSource": { "description": "A Cloud Storage source of conversation data.", "id": "GoogleCloudContactcenterinsightsV1GcsSource", @@ -3082,162 +2942,6 @@ }, "type": "object" }, -"GoogleCloudContactcenterinsightsV1GeneratorSuggestion": { -"description": "Suggestion generated using a Generator.", -"id": "GoogleCloudContactcenterinsightsV1GeneratorSuggestion", -"properties": { -"agentCoachingSuggestion": { -"$ref": "GoogleCloudContactcenterinsightsV1AgentCoachingSuggestion", -"description": "Optional. Suggestion to coach the agent." -}, -"freeFormSuggestion": { -"$ref": "GoogleCloudContactcenterinsightsV1FreeFormSuggestion", -"description": "Optional. Free form suggestion." -}, -"summarySuggestion": { -"$ref": "GoogleCloudContactcenterinsightsV1SummarySuggestion", -"description": "Optional. Suggested summary." 
-} -}, -"type": "object" -}, -"GoogleCloudContactcenterinsightsV1GetGeneratorSuggestionResponse": { -"description": "Represents response from generators.", -"id": "GoogleCloudContactcenterinsightsV1GetGeneratorSuggestionResponse", -"properties": { -"generatorSuggestion": { -"$ref": "GoogleCloudContactcenterinsightsV1GeneratorSuggestion", -"description": "The suggestion generated from the Generator." -} -}, -"type": "object" -}, -"GoogleCloudContactcenterinsightsV1GetKnowledgeAssistResponse": { -"description": "Response for Knowledge Assist. Contains suggested query and optionally includes an answer for the query.", -"id": "GoogleCloudContactcenterinsightsV1GetKnowledgeAssistResponse", -"properties": { -"suggestedQuery": { -"$ref": "GoogleCloudContactcenterinsightsV1GetKnowledgeAssistResponseSuggestedQuery", -"description": "The query suggested based on the context. Suggestion is made only if it is different from the previous suggestion." -}, -"suggestedQueryAnswer": { -"$ref": "GoogleCloudContactcenterinsightsV1GetKnowledgeAssistResponseKnowledgeAnswer", -"description": "The answer generated for the suggested query. Whether or not an answer is generated depends on how confident we are about the generated query." -} -}, -"type": "object" -}, -"GoogleCloudContactcenterinsightsV1GetKnowledgeAssistResponseKnowledgeAnswer": { -"description": "Represents an answer from Knowledge. Cuurently supports FAQ and Generative answers.", -"id": "GoogleCloudContactcenterinsightsV1GetKnowledgeAssistResponseKnowledgeAnswer", -"properties": { -"answerText": { -"description": "The piece of text from the `source` that answers this suggested query.", -"type": "string" -}, -"faqSource": { -"$ref": "GoogleCloudContactcenterinsightsV1GetKnowledgeAssistResponseKnowledgeAnswerFaqSource", -"description": "Populated if the prediction came from FAQ." -}, -"generativeSource": { -"$ref": "GoogleCloudContactcenterinsightsV1GetKnowledgeAssistResponseKnowledgeAnswerGenerativeSource", -"description": "Populated if the prediction was Generative." -}, -"intentMatchingSource": { -"$ref": "GoogleCloudContactcenterinsightsV1GetKnowledgeAssistResponseKnowledgeAnswerIntentMatchingSource", -"description": "Populated if the prediction was from intent matching." -}, -"matchConfidence": { -"description": "The system's confidence score that this answer is a good match for this conversational query. The range is from 0.0 (completely uncertain) to 1.0 (completely certain).", -"format": "float", -"type": "number" -} -}, -"type": "object" -}, -"GoogleCloudContactcenterinsightsV1GetKnowledgeAssistResponseKnowledgeAnswerFaqSource": { -"description": "Details about source of FAQ answer.", -"id": "GoogleCloudContactcenterinsightsV1GetKnowledgeAssistResponseKnowledgeAnswerFaqSource", -"properties": { -"document": { -"description": "Indicates which Knowledge Document this answer was extracted from. 
Format: `projects//knowledgeBases//documents/`.", -"type": "string" -}, -"question": { -"description": "The corresponding FAQ question.", -"type": "string" -} -}, -"type": "object" -}, -"GoogleCloudContactcenterinsightsV1GetKnowledgeAssistResponseKnowledgeAnswerGenerativeSource": { -"description": "Details about source of Generative answer.", -"id": "GoogleCloudContactcenterinsightsV1GetKnowledgeAssistResponseKnowledgeAnswerGenerativeSource", -"properties": { -"snippets": { -"description": "All snippets used for this Generative Prediction, with their source URI and data.", -"items": { -"$ref": "GoogleCloudContactcenterinsightsV1GetKnowledgeAssistResponseKnowledgeAnswerGenerativeSourceSnippet" -}, -"type": "array" -} -}, -"type": "object" -}, -"GoogleCloudContactcenterinsightsV1GetKnowledgeAssistResponseKnowledgeAnswerGenerativeSourceSnippet": { -"description": "Snippet Source for a Generative Prediction.", -"id": "GoogleCloudContactcenterinsightsV1GetKnowledgeAssistResponseKnowledgeAnswerGenerativeSourceSnippet", -"properties": { -"document": { -"description": "Indicates which Knowledge Document this snippet was extracted from. Format: `projects//knowledgeBases//documents/`.", -"type": "string" -}, -"text": { -"description": "text taken from that URI.", -"type": "string" -}, -"title": { -"description": "Title of the document.", -"type": "string" -}, -"uri": { -"description": "URI the data is sourced from.", -"type": "string" -} -}, -"type": "object" -}, -"GoogleCloudContactcenterinsightsV1GetKnowledgeAssistResponseKnowledgeAnswerIntentMatchingSource": { -"description": "Details about source of Intent Matching answer.", -"id": "GoogleCloudContactcenterinsightsV1GetKnowledgeAssistResponseKnowledgeAnswerIntentMatchingSource", -"properties": { -"title": { -"description": "Title of the document.", -"type": "string" -}, -"uri": { -"description": "URI the data is sourced from.", -"type": "string" -} -}, -"type": "object" -}, -"GoogleCloudContactcenterinsightsV1GetKnowledgeAssistResponseSuggestedQuery": { -"description": "Represents a suggested query.", -"id": "GoogleCloudContactcenterinsightsV1GetKnowledgeAssistResponseSuggestedQuery", -"properties": { -"queryText": { -"description": "Suggested query text.", -"type": "string" -}, -"score": { -"description": "Suggested query score.", -"format": "float", -"type": "number" -} -}, -"type": "object" -}, "GoogleCloudContactcenterinsightsV1HoldData": { "description": "The data for a hold annotation.", "id": "GoogleCloudContactcenterinsightsV1HoldData", @@ -4072,18 +3776,6 @@ "$ref": "GoogleCloudContactcenterinsightsV1FaqAnswerData", "description": "Agent Assist FAQ answer data." }, -"generatorSuggestionResult": { -"$ref": "GoogleCloudContactcenterinsightsV1GetGeneratorSuggestionResponse", -"description": "The generator suggestion result." -}, -"knowledgeAssistResult": { -"$ref": "GoogleCloudContactcenterinsightsV1GetKnowledgeAssistResponse", -"description": "The Knowledge Assist result." -}, -"knowledgeSearchResult": { -"$ref": "GoogleCloudContactcenterinsightsV1SearchKnowledgeAnswer", -"description": "The Knowledge Search result." -}, "smartComposeSuggestion": { "$ref": "GoogleCloudContactcenterinsightsV1SmartComposeSuggestionData", "description": "Agent Assist Smart Compose suggestion data." @@ -4095,71 +3787,24 @@ "startBoundary": { "$ref": "GoogleCloudContactcenterinsightsV1AnnotationBoundary", "description": "The boundary in the conversation where the annotation starts, inclusive." 
-} -}, -"type": "object" -}, -"GoogleCloudContactcenterinsightsV1SearchKnowledgeAnswer": { -"description": "Represents a SearchKnowledge answer.", -"id": "GoogleCloudContactcenterinsightsV1SearchKnowledgeAnswer", -"properties": { -"answer": { -"description": "The piece of text from the knowledge base documents that answers the search query", -"type": "string" -}, -"answerRecord": { -"description": "The name of the answer record. Format: `projects//locations//answer Records/`", -"type": "string" -}, -"answerSources": { -"description": "All sources used to generate the answer.", -"items": { -"$ref": "GoogleCloudContactcenterinsightsV1SearchKnowledgeAnswerAnswerSource" -}, -"type": "array" -}, -"answerType": { -"description": "The type of the answer.", -"enum": [ -"ANSWER_TYPE_UNSPECIFIED", -"FAQ", -"GENERATIVE", -"INTENT" -], -"enumDescriptions": [ -"The answer has a unspecified type.", -"The answer is from FAQ documents.", -"The answer is from generative model.", -"The answer is from intent matching." -], -"type": "string" }, -"confidenceScore": { -"description": "The confidence score in [0.0, 1.0] range.", -"format": "float", -"type": "number" +"userInput": { +"$ref": "GoogleCloudContactcenterinsightsV1RuntimeAnnotationUserInput", +"description": "Explicit input used for generating the answer" } }, "type": "object" }, -"GoogleCloudContactcenterinsightsV1SearchKnowledgeAnswerAnswerSource": { -"description": "The sources of the answers.", -"id": "GoogleCloudContactcenterinsightsV1SearchKnowledgeAnswerAnswerSource", +"GoogleCloudContactcenterinsightsV1RuntimeAnnotationUserInput": { +"description": "Explicit input used for generating the answer", +"id": "GoogleCloudContactcenterinsightsV1RuntimeAnnotationUserInput", "properties": { -"document": { -"description": "The document from which the snippet was extracted. Format: `projects//knowledgeBases//documents/`", +"generatorName": { +"description": "The resource name of associated generator. Format: `projects//locations//generators/`", "type": "string" }, -"snippet": { -"description": "The relevant snippet of the article.", -"type": "string" -}, -"title": { -"description": "The title of the article.", -"type": "string" -}, -"uri": { -"description": "The URI of the article.", +"query": { +"description": "Query text. Article Search uses this to store the input query used to generate the search results.", "type": "string" } }, @@ -4325,35 +3970,6 @@ }, "type": "object" }, -"GoogleCloudContactcenterinsightsV1SummarySuggestion": { -"description": "Suggested summary of the conversation.", -"id": "GoogleCloudContactcenterinsightsV1SummarySuggestion", -"properties": { -"summarySections": { -"description": "Required. All the parts of generated summary.", -"items": { -"$ref": "GoogleCloudContactcenterinsightsV1SummarySuggestionSummarySection" -}, -"type": "array" -} -}, -"type": "object" -}, -"GoogleCloudContactcenterinsightsV1SummarySuggestionSummarySection": { -"description": "A component of the generated summary.", -"id": "GoogleCloudContactcenterinsightsV1SummarySuggestionSummarySection", -"properties": { -"section": { -"description": "Required. Name of the section.", -"type": "string" -}, -"summary": { -"description": "Required. 
Summary text for the section.", -"type": "string" -} -}, -"type": "object" -}, "GoogleCloudContactcenterinsightsV1UndeployIssueModelMetadata": { "description": "Metadata for undeploying an issue model.", "id": "GoogleCloudContactcenterinsightsV1UndeployIssueModelMetadata", @@ -4486,128 +4102,6 @@ }, "type": "object" }, -"GoogleCloudContactcenterinsightsV1alpha1AgentCoachingInstruction": { -"description": "Agent Coaching instructions that customer can configure.", -"id": "GoogleCloudContactcenterinsightsV1alpha1AgentCoachingInstruction", -"properties": { -"agentAction": { -"description": "Optional. The action that human agent should take. For example, \"apologize for the slow shipping\". If the users only want to use agent coaching for intent detection, agent_action can be empty", -"type": "string" -}, -"condition": { -"description": "Optional. The condition of the instruction. For example, \"the customer wants to cancel an order\". If the users want the instruction to be triggered unconditionally, the condition can be empty.", -"type": "string" -}, -"description": { -"description": "Optional. The detailed description of this instruction.", -"type": "string" -}, -"displayName": { -"description": "Optional. Display name for the instruction.", -"type": "string" -}, -"metadata": { -"additionalProperties": { -"type": "string" -}, -"description": "Optional. Additional information attached to this instruction.", -"type": "object" -}, -"systemAction": { -"description": "Optional. The action that system should take. For example, \"call GetOrderTime with order_number={order number provided by the customer}\". If the users don't have plugins or don't want to trigger plugins, the system_action can be empty", -"type": "string" -} -}, -"type": "object" -}, -"GoogleCloudContactcenterinsightsV1alpha1AgentCoachingSuggestion": { -"description": "Suggestion for coaching agents.", -"id": "GoogleCloudContactcenterinsightsV1alpha1AgentCoachingSuggestion", -"properties": { -"agentActionSuggestions": { -"description": "Optional. Suggested actions for the agent to take.", -"items": { -"$ref": "GoogleCloudContactcenterinsightsV1alpha1AgentCoachingSuggestionAgentActionSuggestion" -}, -"type": "array" -}, -"applicableInstructions": { -"description": "Optional. Instructions applicable based on the current context.", -"items": { -"$ref": "GoogleCloudContactcenterinsightsV1alpha1AgentCoachingInstruction" -}, -"type": "array" -}, -"sampleResponses": { -"description": "Optional. Sample response for the Agent.", -"items": { -"$ref": "GoogleCloudContactcenterinsightsV1alpha1AgentCoachingSuggestionSampleResponse" -}, -"type": "array" -}, -"suggestionEval": { -"$ref": "GoogleCloudContactcenterinsightsV1alpha1AgentCoachingSuggestionAgentCoachingSuggestionEval", -"description": "Self evaluation of the suggestion." -}, -"suggestionReasoning": { -"$ref": "GoogleCloudContactcenterinsightsV1alpha1AgentCoachingSuggestionAgentCoachingSuggestionReasoning", -"description": "Reasoning for the suggestion." -} -}, -"type": "object" -}, -"GoogleCloudContactcenterinsightsV1alpha1AgentCoachingSuggestionAgentActionSuggestion": { -"description": "Actions suggested for the agent. This is based on applicable instructions.", -"id": "GoogleCloudContactcenterinsightsV1alpha1AgentCoachingSuggestionAgentActionSuggestion", -"properties": { -"agentAction": { -"description": "Optional. 
The suggested action for the agent.", -"type": "string" -} -}, -"type": "object" -}, -"GoogleCloudContactcenterinsightsV1alpha1AgentCoachingSuggestionAgentCoachingSuggestionEval": { -"description": "Self evaluations of the suggestion.", -"id": "GoogleCloudContactcenterinsightsV1alpha1AgentCoachingSuggestionAgentCoachingSuggestionEval", -"properties": { -"actionActionSuggestionEval": { -"description": "Optional. Eval for Agent action suggestion.", -"type": "string" -}, -"sampleResponseEval": { -"description": "Optional. Eval for sample response.", -"type": "string" -} -}, -"type": "object" -}, -"GoogleCloudContactcenterinsightsV1alpha1AgentCoachingSuggestionAgentCoachingSuggestionReasoning": { -"description": "Reasoning for the suggestion.", -"id": "GoogleCloudContactcenterinsightsV1alpha1AgentCoachingSuggestionAgentCoachingSuggestionReasoning", -"properties": { -"agentActionTaken": { -"description": "Optional. The actions that the agent has taken already.", -"type": "string" -}, -"issueSummary": { -"description": "Optional. Summary of the issue.", -"type": "string" -} -}, -"type": "object" -}, -"GoogleCloudContactcenterinsightsV1alpha1AgentCoachingSuggestionSampleResponse": { -"description": "Sample response that the agent can use. This could be based on applicable instructions and ingested data from other systems.", -"id": "GoogleCloudContactcenterinsightsV1alpha1AgentCoachingSuggestionSampleResponse", -"properties": { -"responseText": { -"description": "Optional. Sample response for Agent in text.", -"type": "string" -} -}, -"type": "object" -}, "GoogleCloudContactcenterinsightsV1alpha1Analysis": { "description": "The analysis resource.", "id": "GoogleCloudContactcenterinsightsV1alpha1Analysis", @@ -5942,24 +5436,6 @@ }, "type": "object" }, -"GoogleCloudContactcenterinsightsV1alpha1FreeFormSuggestion": { -"description": "Suggestion generated using free form generator.", -"id": "GoogleCloudContactcenterinsightsV1alpha1FreeFormSuggestion", -"properties": { -"labels": { -"description": "Optional. Labels for the generator.", -"items": { -"type": "string" -}, -"type": "array" -}, -"response": { -"description": "Required. Free form suggestion.", -"type": "string" -} -}, -"type": "object" -}, "GoogleCloudContactcenterinsightsV1alpha1GcsSource": { "description": "A Cloud Storage source of conversation data.", "id": "GoogleCloudContactcenterinsightsV1alpha1GcsSource", @@ -5975,162 +5451,6 @@ }, "type": "object" }, -"GoogleCloudContactcenterinsightsV1alpha1GeneratorSuggestion": { -"description": "Suggestion generated using a Generator.", -"id": "GoogleCloudContactcenterinsightsV1alpha1GeneratorSuggestion", -"properties": { -"agentCoachingSuggestion": { -"$ref": "GoogleCloudContactcenterinsightsV1alpha1AgentCoachingSuggestion", -"description": "Optional. Suggestion to coach the agent." -}, -"freeFormSuggestion": { -"$ref": "GoogleCloudContactcenterinsightsV1alpha1FreeFormSuggestion", -"description": "Optional. Free form suggestion." -}, -"summarySuggestion": { -"$ref": "GoogleCloudContactcenterinsightsV1alpha1SummarySuggestion", -"description": "Optional. Suggested summary." -} -}, -"type": "object" -}, -"GoogleCloudContactcenterinsightsV1alpha1GetGeneratorSuggestionResponse": { -"description": "Represents response from generators.", -"id": "GoogleCloudContactcenterinsightsV1alpha1GetGeneratorSuggestionResponse", -"properties": { -"generatorSuggestion": { -"$ref": "GoogleCloudContactcenterinsightsV1alpha1GeneratorSuggestion", -"description": "The suggestion generated from the Generator." 
-} -}, -"type": "object" -}, -"GoogleCloudContactcenterinsightsV1alpha1GetKnowledgeAssistResponse": { -"description": "Response for Knowledge Assist. Contains suggested query and optionally includes an answer for the query.", -"id": "GoogleCloudContactcenterinsightsV1alpha1GetKnowledgeAssistResponse", -"properties": { -"suggestedQuery": { -"$ref": "GoogleCloudContactcenterinsightsV1alpha1GetKnowledgeAssistResponseSuggestedQuery", -"description": "The query suggested based on the context. Suggestion is made only if it is different from the previous suggestion." -}, -"suggestedQueryAnswer": { -"$ref": "GoogleCloudContactcenterinsightsV1alpha1GetKnowledgeAssistResponseKnowledgeAnswer", -"description": "The answer generated for the suggested query. Whether or not an answer is generated depends on how confident we are about the generated query." -} -}, -"type": "object" -}, -"GoogleCloudContactcenterinsightsV1alpha1GetKnowledgeAssistResponseKnowledgeAnswer": { -"description": "Represents an answer from Knowledge. Cuurently supports FAQ and Generative answers.", -"id": "GoogleCloudContactcenterinsightsV1alpha1GetKnowledgeAssistResponseKnowledgeAnswer", -"properties": { -"answerText": { -"description": "The piece of text from the `source` that answers this suggested query.", -"type": "string" -}, -"faqSource": { -"$ref": "GoogleCloudContactcenterinsightsV1alpha1GetKnowledgeAssistResponseKnowledgeAnswerFaqSource", -"description": "Populated if the prediction came from FAQ." -}, -"generativeSource": { -"$ref": "GoogleCloudContactcenterinsightsV1alpha1GetKnowledgeAssistResponseKnowledgeAnswerGenerativeSource", -"description": "Populated if the prediction was Generative." -}, -"intentMatchingSource": { -"$ref": "GoogleCloudContactcenterinsightsV1alpha1GetKnowledgeAssistResponseKnowledgeAnswerIntentMatchingSource", -"description": "Populated if the prediction was from intent matching." -}, -"matchConfidence": { -"description": "The system's confidence score that this answer is a good match for this conversational query. The range is from 0.0 (completely uncertain) to 1.0 (completely certain).", -"format": "float", -"type": "number" -} -}, -"type": "object" -}, -"GoogleCloudContactcenterinsightsV1alpha1GetKnowledgeAssistResponseKnowledgeAnswerFaqSource": { -"description": "Details about source of FAQ answer.", -"id": "GoogleCloudContactcenterinsightsV1alpha1GetKnowledgeAssistResponseKnowledgeAnswerFaqSource", -"properties": { -"document": { -"description": "Indicates which Knowledge Document this answer was extracted from. 
Format: `projects//knowledgeBases//documents/`.", -"type": "string" -}, -"question": { -"description": "The corresponding FAQ question.", -"type": "string" -} -}, -"type": "object" -}, -"GoogleCloudContactcenterinsightsV1alpha1GetKnowledgeAssistResponseKnowledgeAnswerGenerativeSource": { -"description": "Details about source of Generative answer.", -"id": "GoogleCloudContactcenterinsightsV1alpha1GetKnowledgeAssistResponseKnowledgeAnswerGenerativeSource", -"properties": { -"snippets": { -"description": "All snippets used for this Generative Prediction, with their source URI and data.", -"items": { -"$ref": "GoogleCloudContactcenterinsightsV1alpha1GetKnowledgeAssistResponseKnowledgeAnswerGenerativeSourceSnippet" -}, -"type": "array" -} -}, -"type": "object" -}, -"GoogleCloudContactcenterinsightsV1alpha1GetKnowledgeAssistResponseKnowledgeAnswerGenerativeSourceSnippet": { -"description": "Snippet Source for a Generative Prediction.", -"id": "GoogleCloudContactcenterinsightsV1alpha1GetKnowledgeAssistResponseKnowledgeAnswerGenerativeSourceSnippet", -"properties": { -"document": { -"description": "Indicates which Knowledge Document this snippet was extracted from. Format: `projects//knowledgeBases//documents/`.", -"type": "string" -}, -"text": { -"description": "text taken from that URI.", -"type": "string" -}, -"title": { -"description": "Title of the document.", -"type": "string" -}, -"uri": { -"description": "URI the data is sourced from.", -"type": "string" -} -}, -"type": "object" -}, -"GoogleCloudContactcenterinsightsV1alpha1GetKnowledgeAssistResponseKnowledgeAnswerIntentMatchingSource": { -"description": "Details about source of Intent Matching answer.", -"id": "GoogleCloudContactcenterinsightsV1alpha1GetKnowledgeAssistResponseKnowledgeAnswerIntentMatchingSource", -"properties": { -"title": { -"description": "Title of the document.", -"type": "string" -}, -"uri": { -"description": "URI the data is sourced from.", -"type": "string" -} -}, -"type": "object" -}, -"GoogleCloudContactcenterinsightsV1alpha1GetKnowledgeAssistResponseSuggestedQuery": { -"description": "Represents a suggested query.", -"id": "GoogleCloudContactcenterinsightsV1alpha1GetKnowledgeAssistResponseSuggestedQuery", -"properties": { -"queryText": { -"description": "Suggested query text.", -"type": "string" -}, -"score": { -"description": "Suggested query score.", -"format": "float", -"type": "number" -} -}, -"type": "object" -}, "GoogleCloudContactcenterinsightsV1alpha1HoldData": { "description": "The data for a hold annotation.", "id": "GoogleCloudContactcenterinsightsV1alpha1HoldData", @@ -6687,18 +6007,6 @@ "$ref": "GoogleCloudContactcenterinsightsV1alpha1FaqAnswerData", "description": "Agent Assist FAQ answer data." }, -"generatorSuggestionResult": { -"$ref": "GoogleCloudContactcenterinsightsV1alpha1GetGeneratorSuggestionResponse", -"description": "The generator suggestion result." -}, -"knowledgeAssistResult": { -"$ref": "GoogleCloudContactcenterinsightsV1alpha1GetKnowledgeAssistResponse", -"description": "The Knowledge Assist result." -}, -"knowledgeSearchResult": { -"$ref": "GoogleCloudContactcenterinsightsV1alpha1SearchKnowledgeAnswer", -"description": "The Knowledge Search result." -}, "smartComposeSuggestion": { "$ref": "GoogleCloudContactcenterinsightsV1alpha1SmartComposeSuggestionData", "description": "Agent Assist Smart Compose suggestion data." 
@@ -6710,71 +6018,24 @@ "startBoundary": { "$ref": "GoogleCloudContactcenterinsightsV1alpha1AnnotationBoundary", "description": "The boundary in the conversation where the annotation starts, inclusive." -} -}, -"type": "object" -}, -"GoogleCloudContactcenterinsightsV1alpha1SearchKnowledgeAnswer": { -"description": "Represents a SearchKnowledge answer.", -"id": "GoogleCloudContactcenterinsightsV1alpha1SearchKnowledgeAnswer", -"properties": { -"answer": { -"description": "The piece of text from the knowledge base documents that answers the search query", -"type": "string" -}, -"answerRecord": { -"description": "The name of the answer record. Format: `projects//locations//answer Records/`", -"type": "string" -}, -"answerSources": { -"description": "All sources used to generate the answer.", -"items": { -"$ref": "GoogleCloudContactcenterinsightsV1alpha1SearchKnowledgeAnswerAnswerSource" -}, -"type": "array" -}, -"answerType": { -"description": "The type of the answer.", -"enum": [ -"ANSWER_TYPE_UNSPECIFIED", -"FAQ", -"GENERATIVE", -"INTENT" -], -"enumDescriptions": [ -"The answer has a unspecified type.", -"The answer is from FAQ documents.", -"The answer is from generative model.", -"The answer is from intent matching." -], -"type": "string" }, -"confidenceScore": { -"description": "The confidence score in [0.0, 1.0] range.", -"format": "float", -"type": "number" +"userInput": { +"$ref": "GoogleCloudContactcenterinsightsV1alpha1RuntimeAnnotationUserInput", +"description": "Explicit input used for generating the answer." } }, "type": "object" }, -"GoogleCloudContactcenterinsightsV1alpha1SearchKnowledgeAnswerAnswerSource": { -"description": "The sources of the answers.", -"id": "GoogleCloudContactcenterinsightsV1alpha1SearchKnowledgeAnswerAnswerSource", +"GoogleCloudContactcenterinsightsV1alpha1RuntimeAnnotationUserInput": { +"description": "Explicit input used for generating the answer.", +"id": "GoogleCloudContactcenterinsightsV1alpha1RuntimeAnnotationUserInput", "properties": { -"document": { -"description": "The document from which the snippet was extracted. Format: `projects//knowledgeBases//documents/`", +"generatorName": { +"description": "The resource name of the associated generator. Format: `projects//locations//generators/`", "type": "string" }, -"snippet": { -"description": "The relevant snippet of the article.", -"type": "string" -}, -"title": { -"description": "The title of the article.", -"type": "string" -}, -"uri": { -"description": "The URI of the article.", +"query": { +"description": "Query text. Article Search uses this to store the input query used to generate the search results.", "type": "string" } }, @@ -6868,35 +6129,6 @@ }, "type": "object" }, -"GoogleCloudContactcenterinsightsV1alpha1SummarySuggestion": { -"description": "Suggested summary of the conversation.", -"id": "GoogleCloudContactcenterinsightsV1alpha1SummarySuggestion", -"properties": { -"summarySections": { -"description": "Required. All the parts of generated summary.", -"items": { -"$ref": "GoogleCloudContactcenterinsightsV1alpha1SummarySuggestionSummarySection" -}, -"type": "array" -} -}, -"type": "object" -}, -"GoogleCloudContactcenterinsightsV1alpha1SummarySuggestionSummarySection": { -"description": "A component of the generated summary.", -"id": "GoogleCloudContactcenterinsightsV1alpha1SummarySuggestionSummarySection", -"properties": { -"section": { -"description": "Required. Name of the section.", -"type": "string" -}, -"summary": { -"description": "Required. 
Summary text for the section.", -"type": "string" -} -}, -"type": "object" -}, "GoogleCloudContactcenterinsightsV1alpha1UndeployIssueModelMetadata": { "description": "Metadata for undeploying an issue model.", "id": "GoogleCloudContactcenterinsightsV1alpha1UndeployIssueModelMetadata", diff --git a/googleapiclient/discovery_cache/documents/container.v1.json b/googleapiclient/discovery_cache/documents/container.v1.json index 4899e16f2e6..05956d580a4 100644 --- a/googleapiclient/discovery_cache/documents/container.v1.json +++ b/googleapiclient/discovery_cache/documents/container.v1.json @@ -2540,7 +2540,7 @@ } } }, -"revision": "20240510", +"revision": "20240514", "rootUrl": "https://container.googleapis.com/", "schemas": { "AcceleratorConfig": { diff --git a/googleapiclient/discovery_cache/documents/container.v1beta1.json b/googleapiclient/discovery_cache/documents/container.v1beta1.json index eb9ab8c563b..88fa532ac15 100644 --- a/googleapiclient/discovery_cache/documents/container.v1beta1.json +++ b/googleapiclient/discovery_cache/documents/container.v1beta1.json @@ -2565,7 +2565,7 @@ } } }, -"revision": "20240510", +"revision": "20240514", "rootUrl": "https://container.googleapis.com/", "schemas": { "AcceleratorConfig": { diff --git a/googleapiclient/discovery_cache/documents/containeranalysis.v1.json b/googleapiclient/discovery_cache/documents/containeranalysis.v1.json index 14e6ba66a2a..fb2b1f4f28f 100644 --- a/googleapiclient/discovery_cache/documents/containeranalysis.v1.json +++ b/googleapiclient/discovery_cache/documents/containeranalysis.v1.json @@ -1065,7 +1065,7 @@ } } }, -"revision": "20240516", +"revision": "20240524", "rootUrl": "https://containeranalysis.googleapis.com/", "schemas": { "AliasContext": { @@ -2569,7 +2569,7 @@ "type": "string" }, "diskSizeGb": { -"description": "Requested disk size for the VM that runs the build. Note that this is *NOT* \"disk free\"; some of the space will be used by the operating system and build utilities. Also note that this is the minimum disk size that will be allocated for the build -- the build may run with a larger disk than requested. At present, the maximum disk size is 2000GB; builds that request more than the maximum are rejected with an error.", +"description": "Requested disk size for the VM that runs the build. Note that this is *NOT* \"disk free\"; some of the space will be used by the operating system and build utilities. Also note that this is the minimum disk size that will be allocated for the build -- the build may run with a larger disk than requested. At present, the maximum disk size is 4000GB; builds that request more than the maximum are rejected with an error.", "format": "int64", "type": "string" }, diff --git a/googleapiclient/discovery_cache/documents/containeranalysis.v1alpha1.json b/googleapiclient/discovery_cache/documents/containeranalysis.v1alpha1.json index cbbf6cdfbb9..5041cc03884 100644 --- a/googleapiclient/discovery_cache/documents/containeranalysis.v1alpha1.json +++ b/googleapiclient/discovery_cache/documents/containeranalysis.v1alpha1.json @@ -1233,7 +1233,7 @@ } } }, -"revision": "20240516", +"revision": "20240524", "rootUrl": "https://containeranalysis.googleapis.com/", "schemas": { "AnalysisCompleted": { @@ -2543,7 +2543,7 @@ "type": "string" }, "diskSizeGb": { -"description": "Requested disk size for the VM that runs the build. Note that this is *NOT* \"disk free\"; some of the space will be used by the operating system and build utilities. 
Also note that this is the minimum disk size that will be allocated for the build -- the build may run with a larger disk than requested. At present, the maximum disk size is 2000GB; builds that request more than the maximum are rejected with an error.", +"description": "Requested disk size for the VM that runs the build. Note that this is *NOT* \"disk free\"; some of the space will be used by the operating system and build utilities. Also note that this is the minimum disk size that will be allocated for the build -- the build may run with a larger disk than requested. At present, the maximum disk size is 4000GB; builds that request more than the maximum are rejected with an error.", "format": "int64", "type": "string" }, diff --git a/googleapiclient/discovery_cache/documents/containeranalysis.v1beta1.json b/googleapiclient/discovery_cache/documents/containeranalysis.v1beta1.json index cbb8221a6f5..831d370c669 100644 --- a/googleapiclient/discovery_cache/documents/containeranalysis.v1beta1.json +++ b/googleapiclient/discovery_cache/documents/containeranalysis.v1beta1.json @@ -1121,7 +1121,7 @@ } } }, -"revision": "20240516", +"revision": "20240524", "rootUrl": "https://containeranalysis.googleapis.com/", "schemas": { "AliasContext": { @@ -2525,7 +2525,7 @@ "type": "string" }, "diskSizeGb": { -"description": "Requested disk size for the VM that runs the build. Note that this is *NOT* \"disk free\"; some of the space will be used by the operating system and build utilities. Also note that this is the minimum disk size that will be allocated for the build -- the build may run with a larger disk than requested. At present, the maximum disk size is 2000GB; builds that request more than the maximum are rejected with an error.", +"description": "Requested disk size for the VM that runs the build. Note that this is *NOT* \"disk free\"; some of the space will be used by the operating system and build utilities. Also note that this is the minimum disk size that will be allocated for the build -- the build may run with a larger disk than requested. 
At present, the maximum disk size is 4000GB; builds that request more than the maximum are rejected with an error.", "format": "int64", "type": "string" }, diff --git a/googleapiclient/discovery_cache/documents/content.v2.1.json b/googleapiclient/discovery_cache/documents/content.v2.1.json index e6a8211f322..b23299a2fb3 100644 --- a/googleapiclient/discovery_cache/documents/content.v2.1.json +++ b/googleapiclient/discovery_cache/documents/content.v2.1.json @@ -6219,7 +6219,7 @@ } } }, -"revision": "20240522", +"revision": "20240529", "rootUrl": "https://shoppingcontent.googleapis.com/", "schemas": { "Account": { diff --git a/googleapiclient/discovery_cache/documents/customsearch.v1.json b/googleapiclient/discovery_cache/documents/customsearch.v1.json index 2be1063e0ad..d76a3c16f49 100644 --- a/googleapiclient/discovery_cache/documents/customsearch.v1.json +++ b/googleapiclient/discovery_cache/documents/customsearch.v1.json @@ -702,7 +702,7 @@ false } } }, -"revision": "20240526", +"revision": "20240602", "rootUrl": "https://customsearch.googleapis.com/", "schemas": { "Promotion": { diff --git a/googleapiclient/discovery_cache/documents/datamigration.v1.json b/googleapiclient/discovery_cache/documents/datamigration.v1.json index c257dc77c73..398562c8ce6 100644 --- a/googleapiclient/discovery_cache/documents/datamigration.v1.json +++ b/googleapiclient/discovery_cache/documents/datamigration.v1.json @@ -2125,7 +2125,7 @@ } } }, -"revision": "20240515", +"revision": "20240522", "rootUrl": "https://datamigration.googleapis.com/", "schemas": { "AlloyDbConnectionProfile": { @@ -2537,7 +2537,8 @@ "POSTGRES_12", "POSTGRES_13", "POSTGRES_14", -"POSTGRES_15" +"POSTGRES_15", +"POSTGRES_16" ], "enumDescriptions": [ "Unspecified version.", @@ -2561,7 +2562,8 @@ "PostgreSQL 12.", "PostgreSQL 13.", "PostgreSQL 14.", -"PostgreSQL 15." +"PostgreSQL 15.", +"PostgreSQL 16." ], "type": "string" }, @@ -5729,6 +5731,10 @@ "$ref": "SqlServerDatabaseBackup" }, "type": "array" +}, +"useDiffBackup": { +"description": "Optional. 
Enable differential backups.", +"type": "boolean" } }, "type": "object" diff --git a/googleapiclient/discovery_cache/documents/datamigration.v1beta1.json b/googleapiclient/discovery_cache/documents/datamigration.v1beta1.json index a78e6f36b53..c97e3988a6f 100644 --- a/googleapiclient/discovery_cache/documents/datamigration.v1beta1.json +++ b/googleapiclient/discovery_cache/documents/datamigration.v1beta1.json @@ -1049,7 +1049,7 @@ } } }, -"revision": "20240515", +"revision": "20240522", "rootUrl": "https://datamigration.googleapis.com/", "schemas": { "AuditConfig": { diff --git a/googleapiclient/discovery_cache/documents/datapipelines.v1.json b/googleapiclient/discovery_cache/documents/datapipelines.v1.json index 03f59b1a745..4b4d24fb514 100644 --- a/googleapiclient/discovery_cache/documents/datapipelines.v1.json +++ b/googleapiclient/discovery_cache/documents/datapipelines.v1.json @@ -369,7 +369,7 @@ } } }, -"revision": "20240512", +"revision": "20240526", "rootUrl": "https://datapipelines.googleapis.com/", "schemas": { "GoogleCloudDatapipelinesV1DataflowJobDetails": { diff --git a/googleapiclient/discovery_cache/documents/dataplex.v1.json b/googleapiclient/discovery_cache/documents/dataplex.v1.json index 39191206611..96c2a075958 100644 --- a/googleapiclient/discovery_cache/documents/dataplex.v1.json +++ b/googleapiclient/discovery_cache/documents/dataplex.v1.json @@ -5271,7 +5271,7 @@ } } }, -"revision": "20240513", +"revision": "20240523", "rootUrl": "https://dataplex.googleapis.com/", "schemas": { "Empty": { @@ -6888,7 +6888,7 @@ }, "sqlAssertion": { "$ref": "GoogleCloudDataplexV1DataQualityRuleSqlAssertion", -"description": "Aggregate rule which evaluates the number of rows returned for the provided statement." +"description": "Aggregate rule which evaluates the number of rows returned for the provided statement. If any rows are returned, this rule fails." }, "statisticRangeExpectation": { "$ref": "GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectation", @@ -6955,7 +6955,7 @@ "id": "GoogleCloudDataplexV1DataQualityRuleResult", "properties": { "assertionRowCount": { -"description": "Output only. The number of rows returned by the sql statement in the SqlAssertion rule.This field is only valid for SqlAssertion rules.", +"description": "Output only. The number of rows returned by the SQL statement in a SQL assertion rule. This field is only valid for SQL assertion rules.", "format": "int64", "readOnly": true, "type": "string" @@ -7021,7 +7021,7 @@ "type": "object" }, "GoogleCloudDataplexV1DataQualityRuleSqlAssertion": { -"description": "Queries for rows returned by the provided SQL statement. If any rows are are returned, this rule fails.The SQL statement needs to use BigQuery standard SQL syntax, and must not contain any semicolons.${data()} can be used to reference the rows being evaluated, i.e. the table after all additional filters (row filters, incremental data filters, sampling) are applied.Example: SELECT * FROM ${data()} WHERE price < 0", +"description": "A SQL statement that is evaluated to return rows that match an invalid state. If any rows are returned, this rule fails. The SQL statement must use BigQuery standard SQL syntax, and must not contain any semicolons. You can use the data reference parameter ${data()} to reference the source table with all of its precondition filters applied. Examples of precondition filters include row filters, incremental data filters, and sampling. 
For more information, see Data reference parameter (https://cloud.google.com/dataplex/docs/auto-data-quality-overview#data-reference-parameter). Example: SELECT * FROM ${data()} WHERE price < 0", "id": "GoogleCloudDataplexV1DataQualityRuleSqlAssertion", "properties": { "sqlStatement": { @@ -7092,7 +7092,7 @@ "id": "GoogleCloudDataplexV1DataQualityScanRuleResult", "properties": { "assertionRowCount": { -"description": "The number of rows returned by the sql statement in the SqlAssertion rule. This field is only valid for SqlAssertion rules.", +"description": "The number of rows returned by the SQL statement in a SQL assertion rule. This field is only valid for SQL assertion rules.", "format": "int64", "type": "string" }, @@ -7175,15 +7175,15 @@ ], "enumDescriptions": [ "An unspecified rule type.", -"Please see https://cloud.google.com/dataplex/docs/reference/rest/v1/DataQualityRule#nonnullexpectation.", -"Please see https://cloud.google.com/dataplex/docs/reference/rest/v1/DataQualityRule#rangeexpectation.", -"Please see https://cloud.google.com/dataplex/docs/reference/rest/v1/DataQualityRule#regexexpectation.", -"Please see https://cloud.google.com/dataplex/docs/reference/rest/v1/DataQualityRule#rowconditionexpectation.", -"Please see https://cloud.google.com/dataplex/docs/reference/rest/v1/DataQualityRule#setexpectation.", -"Please see https://cloud.google.com/dataplex/docs/reference/rest/v1/DataQualityRule#statisticrangeexpectation.", -"Please see https://cloud.google.com/dataplex/docs/reference/rest/v1/DataQualityRule#tableconditionexpectation.", -"Please see https://cloud.google.com/dataplex/docs/reference/rest/v1/DataQualityRule#uniquenessexpectation.", -"Please see https://cloud.google.com/dataplex/docs/reference/rest/v1/DataQualityRule#sqlAssertion." +"See DataQualityRule.NonNullExpectation.", +"See DataQualityRule.RangeExpectation.", +"See DataQualityRule.RegexExpectation.", +"See DataQualityRule.RowConditionExpectation.", +"See DataQualityRule.SetExpectation.", +"See DataQualityRule.StatisticRangeExpectation.", +"See DataQualityRule.TableConditionExpectation.", +"See DataQualityRule.UniquenessExpectation.", +"See DataQualityRule.SqlAssertion." ], "type": "string" }, @@ -9860,26 +9860,29 @@ "id": "GoogleCloudDataplexV1SearchEntriesResult", "properties": { "dataplexEntry": { -"$ref": "GoogleCloudDataplexV1Entry", -"description": "Entry format of the result." +"$ref": "GoogleCloudDataplexV1Entry" }, "linkedResource": { +"deprecated": true, "description": "Linked resource name.", "type": "string" }, "snippets": { "$ref": "GoogleCloudDataplexV1SearchEntriesResultSnippets", +"deprecated": true, "description": "Snippets." 
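The sqlAssertion schema above exposes a single sqlStatement property, so the rule body stays small. A minimal Python sketch follows, reusing the example statement from the description; any surrounding DataQualityRule fields beyond sqlAssertion are out of scope for this diff and intentionally omitted.

    # Sketch only: a SQL assertion rule per GoogleCloudDataplexV1DataQualityRuleSqlAssertion.
    # The rule fails when the statement returns any rows. ${data()} is the data
    # reference parameter that expands to the source table with its precondition
    # filters (row filters, incremental data filters, sampling) applied.
    sql_assertion_rule = {
        "sqlAssertion": {
            "sqlStatement": "SELECT * FROM ${data()} WHERE price < 0",
        },
    }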
} }, "type": "object" }, "GoogleCloudDataplexV1SearchEntriesResultSnippets": { +"deprecated": true, "description": "Snippets for the entry, contains HTML-style highlighting for matched tokens, will be used in UI.", "id": "GoogleCloudDataplexV1SearchEntriesResultSnippets", "properties": { "dataplexEntry": { "$ref": "GoogleCloudDataplexV1Entry", +"deprecated": true, "description": "Entry" } }, diff --git a/googleapiclient/discovery_cache/documents/dataportability.v1.json b/googleapiclient/discovery_cache/documents/dataportability.v1.json index 502b0ab7e48..7a3880a888b 100644 --- a/googleapiclient/discovery_cache/documents/dataportability.v1.json +++ b/googleapiclient/discovery_cache/documents/dataportability.v1.json @@ -641,7 +641,7 @@ } } }, -"revision": "20240524", +"revision": "20240602", "rootUrl": "https://dataportability.googleapis.com/", "schemas": { "Empty": { diff --git a/googleapiclient/discovery_cache/documents/dataportability.v1beta.json b/googleapiclient/discovery_cache/documents/dataportability.v1beta.json index 88317cd7ea1..a18e88a0f3d 100644 --- a/googleapiclient/discovery_cache/documents/dataportability.v1beta.json +++ b/googleapiclient/discovery_cache/documents/dataportability.v1beta.json @@ -641,7 +641,7 @@ } } }, -"revision": "20240524", +"revision": "20240602", "rootUrl": "https://dataportability.googleapis.com/", "schemas": { "Empty": { diff --git a/googleapiclient/discovery_cache/documents/dataproc.v1.json b/googleapiclient/discovery_cache/documents/dataproc.v1.json index ad373b70349..486c55e866d 100644 --- a/googleapiclient/discovery_cache/documents/dataproc.v1.json +++ b/googleapiclient/discovery_cache/documents/dataproc.v1.json @@ -3072,7 +3072,7 @@ } } }, -"revision": "20240505", +"revision": "20240525", "rootUrl": "https://dataproc.googleapis.com/", "schemas": { "AcceleratorConfig": { diff --git a/googleapiclient/discovery_cache/documents/datastream.v1.json b/googleapiclient/discovery_cache/documents/datastream.v1.json index 2be378a3bc7..70a56649461 100644 --- a/googleapiclient/discovery_cache/documents/datastream.v1.json +++ b/googleapiclient/discovery_cache/documents/datastream.v1.json @@ -1250,7 +1250,7 @@ } } }, -"revision": "20240501", +"revision": "20240515", "rootUrl": "https://datastream.googleapis.com/", "schemas": { "AppendOnly": { @@ -2861,6 +2861,12 @@ }, "type": "object" }, +"SqlServerChangeTables": { +"description": "Configuration to use Change Tables CDC read method.", +"id": "SqlServerChangeTables", +"properties": {}, +"type": "object" +}, "SqlServerColumn": { "description": "SQLServer Column.", "id": "SqlServerColumn", @@ -2983,6 +2989,10 @@ "description": "SQLServer data source configuration", "id": "SqlServerSourceConfig", "properties": { +"changeTables": { +"$ref": "SqlServerChangeTables", +"description": "CDC reader reads from change tables." +}, "excludeObjects": { "$ref": "SqlServerRdbms", "description": "SQLServer objects to exclude from the stream." @@ -3000,6 +3010,10 @@ "description": "Max concurrent CDC tasks.", "format": "int32", "type": "integer" +}, +"transactionLogs": { +"$ref": "SqlServerTransactionLogs", +"description": "CDC reader reads from transaction logs." 
} }, "type": "object" @@ -3022,6 +3036,12 @@ }, "type": "object" }, +"SqlServerTransactionLogs": { +"description": "Configuration to use Transaction Logs CDC read method.", +"id": "SqlServerTransactionLogs", +"properties": {}, +"type": "object" +}, "StartBackfillJobRequest": { "description": "Request for manually initiating a backfill job for a specific stream object.", "id": "StartBackfillJobRequest", diff --git a/googleapiclient/discovery_cache/documents/developerconnect.v1.json b/googleapiclient/discovery_cache/documents/developerconnect.v1.json index 284355ae888..e732722b716 100644 --- a/googleapiclient/discovery_cache/documents/developerconnect.v1.json +++ b/googleapiclient/discovery_cache/documents/developerconnect.v1.json @@ -840,7 +840,7 @@ } } }, -"revision": "20240523", +"revision": "20240527", "rootUrl": "https://developerconnect.googleapis.com/", "schemas": { "CancelOperationRequest": { diff --git a/googleapiclient/discovery_cache/documents/dialogflow.v2.json b/googleapiclient/discovery_cache/documents/dialogflow.v2.json index bfcb78df8e2..42f495edf71 100644 --- a/googleapiclient/discovery_cache/documents/dialogflow.v2.json +++ b/googleapiclient/discovery_cache/documents/dialogflow.v2.json @@ -3741,6 +3741,81 @@ } } }, +"generators": { +"methods": { +"create": { +"description": "Creates a generator.", +"flatPath": "v2/projects/{projectsId}/generators", +"httpMethod": "POST", +"id": "dialogflow.projects.generators.create", +"parameterOrder": [ +"parent" +], +"parameters": { +"generatorId": { +"description": "Optional. The ID to use for the generator, which will become the final component of the generator's resource name. The generator ID must be compliant with the regression fomula `a-zA-Z*` with the characters length in range of [3,64]. If the field is not provided, an Id will be auto-generated. If the field is provided, the caller is resposible for 1. the uniqueness of the ID, otherwise the request will be rejected. 2. the consistency for whether to use custom ID or not under a project to better ensure uniqueness.", +"location": "query", +"type": "string" +}, +"parent": { +"description": "Required. The project/location to create generator for. Format: `projects//locations/`", +"location": "path", +"pattern": "^projects/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v2/{+parent}/generators", +"request": { +"$ref": "GoogleCloudDialogflowV2Generator" +}, +"response": { +"$ref": "GoogleCloudDialogflowV2Generator" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform", +"https://www.googleapis.com/auth/dialogflow" +] +}, +"list": { +"description": "Lists generators.", +"flatPath": "v2/projects/{projectsId}/generators", +"httpMethod": "GET", +"id": "dialogflow.projects.generators.list", +"parameterOrder": [ +"parent" +], +"parameters": { +"pageSize": { +"description": "Optional. Maximum number of conversation models to return in a single page. Default to 10.", +"format": "int32", +"location": "query", +"type": "integer" +}, +"pageToken": { +"description": "Optional. The next_page_token value returned from a previous list request.", +"location": "query", +"type": "string" +}, +"parent": { +"description": "Required. The project/location to list generators for. 
Format: `projects//locations/`", +"location": "path", +"pattern": "^projects/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v2/{+parent}/generators", +"response": { +"$ref": "GoogleCloudDialogflowV2ListGeneratorsResponse" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform", +"https://www.googleapis.com/auth/dialogflow" +] +} +} +}, "knowledgeBases": { "methods": { "create": { @@ -7584,6 +7659,168 @@ } } }, +"generators": { +"methods": { +"create": { +"description": "Creates a generator.", +"flatPath": "v2/projects/{projectsId}/locations/{locationsId}/generators", +"httpMethod": "POST", +"id": "dialogflow.projects.locations.generators.create", +"parameterOrder": [ +"parent" +], +"parameters": { +"generatorId": { +"description": "Optional. The ID to use for the generator, which will become the final component of the generator's resource name. The generator ID must be compliant with the regression fomula `a-zA-Z*` with the characters length in range of [3,64]. If the field is not provided, an Id will be auto-generated. If the field is provided, the caller is resposible for 1. the uniqueness of the ID, otherwise the request will be rejected. 2. the consistency for whether to use custom ID or not under a project to better ensure uniqueness.", +"location": "query", +"type": "string" +}, +"parent": { +"description": "Required. The project/location to create generator for. Format: `projects//locations/`", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v2/{+parent}/generators", +"request": { +"$ref": "GoogleCloudDialogflowV2Generator" +}, +"response": { +"$ref": "GoogleCloudDialogflowV2Generator" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform", +"https://www.googleapis.com/auth/dialogflow" +] +}, +"delete": { +"description": "Deletes a generator.", +"flatPath": "v2/projects/{projectsId}/locations/{locationsId}/generators/{generatorsId}", +"httpMethod": "DELETE", +"id": "dialogflow.projects.locations.generators.delete", +"parameterOrder": [ +"name" +], +"parameters": { +"name": { +"description": "Required. The generator resource name to delete. Format: `projects//locations//generators/`", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/generators/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v2/{+name}", +"response": { +"$ref": "GoogleProtobufEmpty" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform", +"https://www.googleapis.com/auth/dialogflow" +] +}, +"get": { +"description": "Retrieves a generator.", +"flatPath": "v2/projects/{projectsId}/locations/{locationsId}/generators/{generatorsId}", +"httpMethod": "GET", +"id": "dialogflow.projects.locations.generators.get", +"parameterOrder": [ +"name" +], +"parameters": { +"name": { +"description": "Required. The generator resource name to retrieve. 
Format: `projects//locations//generators/`", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/generators/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v2/{+name}", +"response": { +"$ref": "GoogleCloudDialogflowV2Generator" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform", +"https://www.googleapis.com/auth/dialogflow" +] +}, +"list": { +"description": "Lists generators.", +"flatPath": "v2/projects/{projectsId}/locations/{locationsId}/generators", +"httpMethod": "GET", +"id": "dialogflow.projects.locations.generators.list", +"parameterOrder": [ +"parent" +], +"parameters": { +"pageSize": { +"description": "Optional. Maximum number of generators to return in a single page. Defaults to 10.", +"format": "int32", +"location": "query", +"type": "integer" +}, +"pageToken": { +"description": "Optional. The next_page_token value returned from a previous list request.", +"location": "query", +"type": "string" +}, +"parent": { +"description": "Required. The project/location to list generators for. Format: `projects//locations/`", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v2/{+parent}/generators", +"response": { +"$ref": "GoogleCloudDialogflowV2ListGeneratorsResponse" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform", +"https://www.googleapis.com/auth/dialogflow" +] +}, +"patch": { +"description": "Updates a generator.", +"flatPath": "v2/projects/{projectsId}/locations/{locationsId}/generators/{generatorsId}", +"httpMethod": "PATCH", +"id": "dialogflow.projects.locations.generators.patch", +"parameterOrder": [ +"name" +], +"parameters": { +"name": { +"description": "Output only. Identifier. The resource name of the generator. Format: `projects//locations//generators/`", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/generators/[^/]+$", +"required": true, +"type": "string" +}, +"updateMask": { +"description": "Optional. The list of fields to update.", +"format": "google-fieldmask", +"location": "query", +"type": "string" +} +}, +"path": "v2/{+name}", +"request": { +"$ref": "GoogleCloudDialogflowV2Generator" +}, +"response": { +"$ref": "GoogleCloudDialogflowV2Generator" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform", +"https://www.googleapis.com/auth/dialogflow" +] +} +} +}, "knowledgeBases": { "methods": { "create": { @@ -8100,6 +8337,39 @@ } } }, +"statelessSuggestion": { +"methods": { +"generate": { +"description": "Generates and returns a suggestion for a conversation that does not have a resource created for it.", +"flatPath": "v2/projects/{projectsId}/locations/{locationsId}/statelessSuggestion:generate", +"httpMethod": "POST", +"id": "dialogflow.projects.locations.statelessSuggestion.generate", +"parameterOrder": [ +"parent" +], +"parameters": { +"parent": { +"description": "Required. The parent resource to charge for the Suggestion's generation. 
Format: `projects//locations/`.", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v2/{+parent}/statelessSuggestion:generate", +"request": { +"$ref": "GoogleCloudDialogflowV2GenerateStatelessSuggestionRequest" +}, +"response": { +"$ref": "GoogleCloudDialogflowV2GenerateStatelessSuggestionResponse" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform", +"https://www.googleapis.com/auth/dialogflow" +] +} +} +}, "suggestions": { "methods": { "generateStatelessSummary": { @@ -8327,7 +8597,7 @@ } } }, -"revision": "20240520", +"revision": "20240523", "rootUrl": "https://dialogflow.googleapis.com/", "schemas": { "GoogleCloudDialogflowCxV3AdvancedSettings": { @@ -13966,6 +14236,20 @@ true }, "type": "object" }, +"GoogleCloudDialogflowV2ConversationContext": { +"description": "Context of the conversation, including transcripts.", +"id": "GoogleCloudDialogflowV2ConversationContext", +"properties": { +"messageEntries": { +"description": "Optional. List of message transcripts in the conversation.", +"items": { +"$ref": "GoogleCloudDialogflowV2MessageEntry" +}, +"type": "array" +} +}, +"type": "object" +}, "GoogleCloudDialogflowV2ConversationDataset": { "description": "Represents a conversation dataset that a user imports raw data into. The data inside ConversationDataset can not be changed after ImportConversationData finishes (and calling ImportConversationData on a dataset that already has data is not allowed).", "id": "GoogleCloudDialogflowV2ConversationDataset", @@ -14927,6 +15211,32 @@ true }, "type": "object" }, +"GoogleCloudDialogflowV2FewShotExample": { +"description": "Providing examples in the generator (i.e. building a few-shot generator) helps convey the desired format of the LLM response. NEXT_ID: 10", +"id": "GoogleCloudDialogflowV2FewShotExample", +"properties": { +"conversationContext": { +"$ref": "GoogleCloudDialogflowV2ConversationContext", +"description": "Optional. Conversation transcripts." +}, +"extraInfo": { +"additionalProperties": { +"type": "string" +}, +"description": "Optional. Key is the placeholder field name in input, value is the value of the placeholder. E.g. instruction contains \"@price\", and ingested data has <\"price\", \"10\">", +"type": "object" +}, +"output": { +"$ref": "GoogleCloudDialogflowV2GeneratorSuggestion", +"description": "Required. Example output of the model." +}, +"summarizationSectionList": { +"$ref": "GoogleCloudDialogflowV2SummarizationSectionList", +"description": "Summarization sections." +} +}, +"type": "object" +}, "GoogleCloudDialogflowV2Fulfillment": { "description": "By default, your agent responds to a matched intent with a static response. As an alternative, you can provide a more dynamic response by using fulfillment. When you enable fulfillment for an intent, Dialogflow responds to that intent by calling a service that you define. For example, if an end-user wants to schedule a haircut on Friday, your service can check your database and respond to the end-user with availability information for Friday. 
For more information, see the [fulfillment guide](https://cloud.google.com/dialogflow/docs/fulfillment-overview).", "id": "GoogleCloudDialogflowV2Fulfillment", @@ -15032,6 +15342,53 @@ true }, "type": "object" }, +"GoogleCloudDialogflowV2GenerateStatelessSuggestionRequest": { +"description": "The request message for Conversations.GenerateStatelessSuggestion.", +"id": "GoogleCloudDialogflowV2GenerateStatelessSuggestionRequest", +"properties": { +"conversationContext": { +"$ref": "GoogleCloudDialogflowV2ConversationContext", +"description": "Optional. Context of the conversation, including transcripts." +}, +"generator": { +"$ref": "GoogleCloudDialogflowV2Generator", +"description": "An uncreated generator. It should be a complete generator that includes all information about the generator." +}, +"generatorName": { +"description": "The resource name of an existing generator. Format: `projects//locations//generators/`", +"type": "string" +}, +"triggerEvents": { +"description": "Optional. A list of trigger events. Generator will be triggered only if its trigger event is included here.", +"items": { +"enum": [ +"TRIGGER_EVENT_UNSPECIFIED", +"END_OF_UTTERANCE", +"MANUAL_CALL" +], +"enumDescriptions": [ +"Default value for TriggerEvent.", +"Triggers when each chat message or voice utterance ends.", +"Triggers on the conversation manually by API calls, such as Conversations.GenerateStatelessSuggestion and Conversations.GenerateSuggestions." +], +"type": "string" +}, +"type": "array" +} +}, +"type": "object" +}, +"GoogleCloudDialogflowV2GenerateStatelessSuggestionResponse": { +"description": "The response message for Conversations.GenerateStatelessSuggestion.", +"id": "GoogleCloudDialogflowV2GenerateStatelessSuggestionResponse", +"properties": { +"generatorSuggestion": { +"$ref": "GoogleCloudDialogflowV2GeneratorSuggestion", +"description": "Required. Generated suggestion for a conversation." +} +}, +"type": "object" +}, "GoogleCloudDialogflowV2GenerateStatelessSummaryRequest": { "description": "The request message for Conversations.GenerateStatelessSummary.", "id": "GoogleCloudDialogflowV2GenerateStatelessSummaryRequest", @@ -15112,6 +15469,67 @@ true }, "type": "object" }, +"GoogleCloudDialogflowV2Generator": { +"description": "LLM generator.", +"id": "GoogleCloudDialogflowV2Generator", +"properties": { +"createTime": { +"description": "Output only. Creation time of this generator.", +"format": "google-datetime", +"readOnly": true, +"type": "string" +}, +"description": { +"description": "Optional. Human readable description of the generator.", +"type": "string" +}, +"inferenceParameter": { +"$ref": "GoogleCloudDialogflowV2InferenceParameter", +"description": "Optional. Inference parameters for this generator." +}, +"name": { +"description": "Output only. Identifier. The resource name of the generator. Format: `projects//locations//generators/`", +"readOnly": true, +"type": "string" +}, +"summarizationContext": { +"$ref": "GoogleCloudDialogflowV2SummarizationContext", +"description": "Input of prebuilt Summarization feature." +}, +"triggerEvent": { +"description": "Optional. The trigger event of the generator. 
It defines when the generator is triggered in a conversation.", +"enum": [ +"TRIGGER_EVENT_UNSPECIFIED", +"END_OF_UTTERANCE", +"MANUAL_CALL" +], +"enumDescriptions": [ +"Default value for TriggerEvent.", +"Triggers when each chat message or voice utterance ends.", +"Triggers on the conversation manually by API calls, such as Conversations.GenerateStatelessSuggestion and Conversations.GenerateSuggestions." +], +"type": "string" +}, +"updateTime": { +"description": "Output only. Update time of this generator.", +"format": "google-datetime", +"readOnly": true, +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDialogflowV2GeneratorSuggestion": { +"description": "Suggestion generated using a Generator.", +"id": "GoogleCloudDialogflowV2GeneratorSuggestion", +"properties": { +"summarySuggestion": { +"$ref": "GoogleCloudDialogflowV2SummarySuggestion", +"description": "Optional. Suggested summary." +} +}, +"type": "object" +}, "GoogleCloudDialogflowV2HumanAgentAssistantConfig": { "description": "Defines the Human Agent Assist to connect to a conversation.", "id": "GoogleCloudDialogflowV2HumanAgentAssistantConfig", @@ -15188,6 +15606,13 @@ true }, "type": "array" }, +"generators": { +"description": "Optional. List of various generator resource names used in the conversation profile.", +"items": { +"type": "string" +}, +"type": "array" +}, "groupSuggestionResponses": { "description": "If `group_suggestion_responses` is false, and there are multiple `feature_configs` in `event based suggestion` or StreamingAnalyzeContent, we will try to deliver suggestions to customers as soon as we get new suggestion. Different type of suggestions based on the same context will be in separate Pub/Sub event or `StreamingAnalyzeContentResponse`. If `group_suggestion_responses` set to true. All the suggestions to the same participant based on the same context will be grouped into a single Pub/Sub event or StreamingAnalyzeContentResponse.", "type": "boolean" @@ -15419,7 +15844,7 @@ true "properties": { "livePersonConfig": { "$ref": "GoogleCloudDialogflowV2HumanAgentHandoffConfigLivePersonConfig", -"description": "Uses LivePerson (https://www.liveperson.com)." +"description": "Uses [LivePerson](https://www.liveperson.com)." }, "salesforceLiveAgentConfig": { "$ref": "GoogleCloudDialogflowV2HumanAgentHandoffConfigSalesforceLiveAgentConfig", @@ -15429,7 +15854,7 @@ "type": "object" }, "GoogleCloudDialogflowV2HumanAgentHandoffConfigLivePersonConfig": { -"description": "Configuration specific to LivePerson (https://www.liveperson.com).", +"description": "Configuration specific to [LivePerson](https://www.liveperson.com).", "id": "GoogleCloudDialogflowV2HumanAgentHandoffConfigLivePersonConfig", "properties": { "accountNumber": { @@ -15600,6 +16025,33 @@ true }, "type": "object" }, +"GoogleCloudDialogflowV2InferenceParameter": { +"description": "The parameters of inference.", +"id": "GoogleCloudDialogflowV2InferenceParameter", +"properties": { +"maxOutputTokens": { +"description": "Optional. Maximum number of output tokens for the generator.", +"format": "int32", +"type": "integer" +}, +"temperature": { +"description": "Optional. Controls the randomness of LLM predictions. Low temperature = less random. High temperature = more random. If unset (or 0), uses a default value of 0.", +"format": "double", +"type": "number" +}, +"topK": { +"description": "Optional. Top-k changes how the model selects tokens for output. 
A top-k of 1 means the selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-k of 3 means that the next token is selected from among the 3 most probable tokens (using temperature). For each token selection step, the top K tokens with the highest probabilities are sampled. Then tokens are further filtered based on topP with the final token selected using temperature sampling. Specify a lower value for less random responses and a higher value for more random responses. Acceptable values are in [1, 40]; defaults to 40.", +"format": "int32", +"type": "integer" +}, +"topP": { +"description": "Optional. Top-p changes how the model selects tokens for output. Tokens are selected from the most probable to the least (see also the topK parameter) until the sum of their probabilities equals the top-p value. For example, if tokens A, B, and C have a probability of 0.3, 0.2, and 0.1 and the top-p value is 0.5, then the model will select either A or B as the next token (using temperature) and doesn't consider C. The default top-p value is 0.95. Specify a lower value for less random responses and a higher value for more random responses. Acceptable values are in [0.0, 1.0]; defaults to 0.95.", +"format": "double", +"type": "number" +} +}, +"type": "object" +}, "GoogleCloudDialogflowV2InputAudioConfig": { "description": "Instructs the speech recognizer how to process the audio content.", "id": "GoogleCloudDialogflowV2InputAudioConfig", @@ -16896,6 +17348,24 @@ true }, "type": "object" }, +"GoogleCloudDialogflowV2ListGeneratorsResponse": { +"description": "Response of ListGenerators.", +"id": "GoogleCloudDialogflowV2ListGeneratorsResponse", +"properties": { +"generators": { +"description": "List of generators retrieved.", +"items": { +"$ref": "GoogleCloudDialogflowV2Generator" +}, +"type": "array" +}, +"nextPageToken": { +"description": "Token to retrieve the next page of results, or empty if there are no more results in the list.", +"type": "string" +} +}, +"type": "object" +}, "GoogleCloudDialogflowV2ListIntentsResponse": { "description": "The response message for Intents.ListIntents.", "id": "GoogleCloudDialogflowV2ListIntentsResponse", @@ -17095,6 +17565,42 @@ true }, "type": "object" }, +"GoogleCloudDialogflowV2MessageEntry": { +"description": "Represents a message entry of a conversation.", +"id": "GoogleCloudDialogflowV2MessageEntry", +"properties": { +"createTime": { +"description": "Optional. Create time of the message entry.", +"format": "google-datetime", +"type": "string" +}, +"languageCode": { +"description": "Optional. The language of the text. See [Language Support](https://cloud.google.com/dialogflow/docs/reference/language) for a list of the currently supported language codes.", +"type": "string" +}, +"role": { +"description": "Optional. Participant role of the message.", +"enum": [ +"ROLE_UNSPECIFIED", +"HUMAN_AGENT", +"AUTOMATED_AGENT", +"END_USER" +], +"enumDescriptions": [ +"Participant role not set.", +"Participant is a human agent.", +"Participant is an automated agent, such as a Dialogflow agent.", +"Participant is an end user that has called or chatted with Dialogflow services." +], +"type": "string" +}, +"text": { +"description": "Optional. 
Transcript content of the message.", +"type": "string" +} +}, +"type": "object" +}, "GoogleCloudDialogflowV2NotificationConfig": { "description": "Defines notification behavior.", "id": "GoogleCloudDialogflowV2NotificationConfig", @@ -18097,6 +18603,117 @@ true }, "type": "object" }, +"GoogleCloudDialogflowV2SummarizationContext": { +"description": "Summarization context that customer can configure.", +"id": "GoogleCloudDialogflowV2SummarizationContext", +"properties": { +"fewShotExamples": { +"description": "Optional. List of few shot examples.", +"items": { +"$ref": "GoogleCloudDialogflowV2FewShotExample" +}, +"type": "array" +}, +"outputLanguageCode": { +"description": "Optional. The target language of the generated summary. The language code for the conversation will be used if this field is empty. Supported in versions 2.0 and later.", +"type": "string" +}, +"summarizationSections": { +"description": "Optional. List of sections. Note that it contains both predefined sections and customer-defined sections.", +"items": { +"$ref": "GoogleCloudDialogflowV2SummarizationSection" +}, +"type": "array" +}, +"version": { +"description": "Optional. Version of the feature. If not set, defaults to the latest version. Current candidates are [\"1.0\"].", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDialogflowV2SummarizationSection": { +"description": "Represents the section of summarization.", +"id": "GoogleCloudDialogflowV2SummarizationSection", +"properties": { +"definition": { +"description": "Optional. Definition of the section, for example, \"what the customer needs help with or has question about.\"", +"type": "string" +}, +"key": { +"description": "Optional. Name of the section, for example, \"situation\".", +"type": "string" +}, +"type": { +"description": "Optional. Type of the summarization section.", +"enum": [ +"TYPE_UNSPECIFIED", +"SITUATION", +"ACTION", +"RESOLUTION", +"REASON_FOR_CANCELLATION", +"CUSTOMER_SATISFACTION", +"ENTITIES", +"CUSTOMER_DEFINED" +], +"enumDescriptions": [ +"Undefined section type, does not return anything.", +"What the customer needs help with or has question about. Section name: \"situation\".", +"What the agent does to help the customer. Section name: \"action\".", +"Result of the customer service. A single word describing the result of the conversation. Section name: \"resolution\".", +"Reason for cancellation if the customer requests for a cancellation. \"N/A\" otherwise. Section name: \"reason_for_cancellation\".", +"\"Unsatisfied\" or \"Satisfied\" depending on the customer's feelings at the end of the conversation. Section name: \"customer_satisfaction\".", +"Key entities extracted from the conversation, such as ticket number, order number, dollar amount, etc. Section names are prefixed by \"entities/\".", +"Customer defined sections." +], +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDialogflowV2SummarizationSectionList": { +"description": "List of summarization sections.", +"id": "GoogleCloudDialogflowV2SummarizationSectionList", +"properties": { +"summarizationSections": { +"description": "Optional. Summarization sections.", +"items": { +"$ref": "GoogleCloudDialogflowV2SummarizationSection" +}, +"type": "array" +} +}, +"type": "object" +}, +"GoogleCloudDialogflowV2SummarySuggestion": { +"description": "Suggested summary of the conversation.", +"id": "GoogleCloudDialogflowV2SummarySuggestion", +"properties": { +"summarySections": { +"description": "Required. 
All the parts of generated summary.", +"items": { +"$ref": "GoogleCloudDialogflowV2SummarySuggestionSummarySection" +}, +"type": "array" +} +}, +"type": "object" +}, +"GoogleCloudDialogflowV2SummarySuggestionSummarySection": { +"description": "A component of the generated summary.", +"id": "GoogleCloudDialogflowV2SummarySuggestionSummarySection", +"properties": { +"section": { +"description": "Required. Name of the section.", +"type": "string" +}, +"summary": { +"description": "Required. Summary text for the section.", +"type": "string" +} +}, +"type": "object" +}, "GoogleCloudDialogflowV2SynthesizeSpeechConfig": { "description": "Configuration of how speech should be synthesized.", "id": "GoogleCloudDialogflowV2SynthesizeSpeechConfig", diff --git a/googleapiclient/discovery_cache/documents/dialogflow.v2beta1.json b/googleapiclient/discovery_cache/documents/dialogflow.v2beta1.json index 08259753555..75f402d6920 100644 --- a/googleapiclient/discovery_cache/documents/dialogflow.v2beta1.json +++ b/googleapiclient/discovery_cache/documents/dialogflow.v2beta1.json @@ -3530,6 +3530,81 @@ } } }, +"generators": { +"methods": { +"create": { +"description": "Creates a generator.", +"flatPath": "v2beta1/projects/{projectsId}/generators", +"httpMethod": "POST", +"id": "dialogflow.projects.generators.create", +"parameterOrder": [ +"parent" +], +"parameters": { +"generatorId": { +"description": "Optional. The ID to use for the generator, which will become the final component of the generator's resource name. The generator ID must be compliant with the regular expression `a-zA-Z*`, with a length in the range of [3,64]. If the field is not provided, an ID will be auto-generated. If the field is provided, the caller is responsible for 1. the uniqueness of the ID, otherwise the request will be rejected; 2. the consistency of whether to use a custom ID or not under a project, to better ensure uniqueness.", +"location": "query", +"type": "string" +}, +"parent": { +"description": "Required. The project/location to create a generator for. Format: `projects//locations/`", +"location": "path", +"pattern": "^projects/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v2beta1/{+parent}/generators", +"request": { +"$ref": "GoogleCloudDialogflowV2beta1Generator" +}, +"response": { +"$ref": "GoogleCloudDialogflowV2beta1Generator" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform", +"https://www.googleapis.com/auth/dialogflow" +] +}, +"list": { +"description": "Lists generators.", +"flatPath": "v2beta1/projects/{projectsId}/generators", +"httpMethod": "GET", +"id": "dialogflow.projects.generators.list", +"parameterOrder": [ +"parent" +], +"parameters": { +"pageSize": { +"description": "Optional. Maximum number of generators to return in a single page. Defaults to 10.", +"format": "int32", +"location": "query", +"type": "integer" +}, +"pageToken": { +"description": "Optional. The next_page_token value returned from a previous list request.", +"location": "query", +"type": "string" +}, +"parent": { +"description": "Required. The project/location to list generators for. 
Format: `projects//locations/`", +"location": "path", +"pattern": "^projects/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v2beta1/{+parent}/generators", +"response": { +"$ref": "GoogleCloudDialogflowV2beta1ListGeneratorsResponse" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform", +"https://www.googleapis.com/auth/dialogflow" +] +} +} +}, "knowledgeBases": { "methods": { "create": { @@ -6976,6 +7051,168 @@ } } }, +"generators": { +"methods": { +"create": { +"description": "Creates a generator.", +"flatPath": "v2beta1/projects/{projectsId}/locations/{locationsId}/generators", +"httpMethod": "POST", +"id": "dialogflow.projects.locations.generators.create", +"parameterOrder": [ +"parent" +], +"parameters": { +"generatorId": { +"description": "Optional. The ID to use for the generator, which will become the final component of the generator's resource name. The generator ID must be compliant with the regression fomula `a-zA-Z*` with the characters length in range of [3,64]. If the field is not provided, an Id will be auto-generated. If the field is provided, the caller is resposible for 1. the uniqueness of the ID, otherwise the request will be rejected. 2. the consistency for whether to use custom ID or not under a project to better ensure uniqueness.", +"location": "query", +"type": "string" +}, +"parent": { +"description": "Required. The project/location to create generator for. Format: `projects//locations/`", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v2beta1/{+parent}/generators", +"request": { +"$ref": "GoogleCloudDialogflowV2beta1Generator" +}, +"response": { +"$ref": "GoogleCloudDialogflowV2beta1Generator" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform", +"https://www.googleapis.com/auth/dialogflow" +] +}, +"delete": { +"description": "Deletes a generator.", +"flatPath": "v2beta1/projects/{projectsId}/locations/{locationsId}/generators/{generatorsId}", +"httpMethod": "DELETE", +"id": "dialogflow.projects.locations.generators.delete", +"parameterOrder": [ +"name" +], +"parameters": { +"name": { +"description": "Required. The generator resource name to delete. Format: `projects//locations//generators/`", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/generators/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v2beta1/{+name}", +"response": { +"$ref": "GoogleProtobufEmpty" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform", +"https://www.googleapis.com/auth/dialogflow" +] +}, +"get": { +"description": "Retrieves a generator.", +"flatPath": "v2beta1/projects/{projectsId}/locations/{locationsId}/generators/{generatorsId}", +"httpMethod": "GET", +"id": "dialogflow.projects.locations.generators.get", +"parameterOrder": [ +"name" +], +"parameters": { +"name": { +"description": "Required. The generator resource name to retrieve. 
Format: `projects//locations//generators/`", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/generators/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v2beta1/{+name}", +"response": { +"$ref": "GoogleCloudDialogflowV2beta1Generator" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform", +"https://www.googleapis.com/auth/dialogflow" +] +}, +"list": { +"description": "Lists generators.", +"flatPath": "v2beta1/projects/{projectsId}/locations/{locationsId}/generators", +"httpMethod": "GET", +"id": "dialogflow.projects.locations.generators.list", +"parameterOrder": [ +"parent" +], +"parameters": { +"pageSize": { +"description": "Optional. Maximum number of generators to return in a single page. Defaults to 10.", +"format": "int32", +"location": "query", +"type": "integer" +}, +"pageToken": { +"description": "Optional. The next_page_token value returned from a previous list request.", +"location": "query", +"type": "string" +}, +"parent": { +"description": "Required. The project/location to list generators for. Format: `projects//locations/`", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v2beta1/{+parent}/generators", +"response": { +"$ref": "GoogleCloudDialogflowV2beta1ListGeneratorsResponse" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform", +"https://www.googleapis.com/auth/dialogflow" +] +}, +"patch": { +"description": "Updates a generator.", +"flatPath": "v2beta1/projects/{projectsId}/locations/{locationsId}/generators/{generatorsId}", +"httpMethod": "PATCH", +"id": "dialogflow.projects.locations.generators.patch", +"parameterOrder": [ +"name" +], +"parameters": { +"name": { +"description": "Output only. Identifier. The resource name of the generator. Format: `projects//locations//generators/`", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/generators/[^/]+$", +"required": true, +"type": "string" +}, +"updateMask": { +"description": "Optional. The list of fields to update.", +"format": "google-fieldmask", +"location": "query", +"type": "string" +} +}, +"path": "v2beta1/{+name}", +"request": { +"$ref": "GoogleCloudDialogflowV2beta1Generator" +}, +"response": { +"$ref": "GoogleCloudDialogflowV2beta1Generator" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform", +"https://www.googleapis.com/auth/dialogflow" +] +} +} +}, "knowledgeBases": { "methods": { "create": { @@ -7468,6 +7705,39 @@ } } }, +"statelessSuggestion": { +"methods": { +"generate": { +"description": "Generates and returns a suggestion for a conversation that does not have a resource created for it.", +"flatPath": "v2beta1/projects/{projectsId}/locations/{locationsId}/statelessSuggestion:generate", +"httpMethod": "POST", +"id": "dialogflow.projects.locations.statelessSuggestion.generate", +"parameterOrder": [ +"parent" +], +"parameters": { +"parent": { +"description": "Required. The parent resource to charge for the Suggestion's generation. 
Format: `projects//locations/`.", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v2beta1/{+parent}/statelessSuggestion:generate", +"request": { +"$ref": "GoogleCloudDialogflowV2beta1GenerateStatelessSuggestionRequest" +}, +"response": { +"$ref": "GoogleCloudDialogflowV2beta1GenerateStatelessSuggestionResponse" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform", +"https://www.googleapis.com/auth/dialogflow" +] +} +} +}, "suggestions": { "methods": { "generateStatelessSummary": { @@ -7695,7 +7965,7 @@ } } }, -"revision": "20240520", +"revision": "20240523", "rootUrl": "https://dialogflow.googleapis.com/", "schemas": { "GoogleCloudDialogflowCxV3AdvancedSettings": { @@ -15608,6 +15878,20 @@ true }, "type": "object" }, +"GoogleCloudDialogflowV2beta1ConversationContext": { +"description": "Context of the conversation, including transcripts.", +"id": "GoogleCloudDialogflowV2beta1ConversationContext", +"properties": { +"messageEntries": { +"description": "Optional. List of message transcripts in the conversation.", +"items": { +"$ref": "GoogleCloudDialogflowV2beta1MessageEntry" +}, +"type": "array" +} +}, +"type": "object" +}, "GoogleCloudDialogflowV2beta1ConversationEvent": { "description": "Represents a notification sent to Pub/Sub subscribers for conversation lifecycle events.", "id": "GoogleCloudDialogflowV2beta1ConversationEvent", @@ -16226,6 +16510,32 @@ true }, "type": "object" }, +"GoogleCloudDialogflowV2beta1FewShotExample": { +"description": "Providing examples in the generator (i.e. building a few-shot generator) helps convey the desired format of the LLM response. NEXT_ID: 10", +"id": "GoogleCloudDialogflowV2beta1FewShotExample", +"properties": { +"conversationContext": { +"$ref": "GoogleCloudDialogflowV2beta1ConversationContext", +"description": "Optional. Conversation transcripts." +}, +"extraInfo": { +"additionalProperties": { +"type": "string" +}, +"description": "Optional. Key is the placeholder field name in input, value is the value of the placeholder. E.g. instruction contains \"@price\", and ingested data has <\"price\", \"10\">", +"type": "object" +}, +"output": { +"$ref": "GoogleCloudDialogflowV2beta1GeneratorSuggestion", +"description": "Required. Example output of the model." +}, +"summarizationSectionList": { +"$ref": "GoogleCloudDialogflowV2beta1SummarizationSectionList", +"description": "Summarization sections." +} +}, +"type": "object" +}, "GoogleCloudDialogflowV2beta1Fulfillment": { "description": "By default, your agent responds to a matched intent with a static response. As an alternative, you can provide a more dynamic response by using fulfillment. When you enable fulfillment for an intent, Dialogflow responds to that intent by calling a service that you define. For example, if an end-user wants to schedule a haircut on Friday, your service can check your database and respond to the end-user with availability information for Friday. 
For more information, see the [fulfillment guide](https://cloud.google.com/dialogflow/docs/fulfillment-overview).", "id": "GoogleCloudDialogflowV2beta1Fulfillment", @@ -16342,6 +16652,53 @@ true }, "type": "object" }, +"GoogleCloudDialogflowV2beta1GenerateStatelessSuggestionRequest": { +"description": "The request message for Conversations.GenerateStatelessSuggestion.", +"id": "GoogleCloudDialogflowV2beta1GenerateStatelessSuggestionRequest", +"properties": { +"conversationContext": { +"$ref": "GoogleCloudDialogflowV2beta1ConversationContext", +"description": "Optional. Context of the conversation, including transcripts." +}, +"generator": { +"$ref": "GoogleCloudDialogflowV2beta1Generator", +"description": "An uncreated generator. It should be a complete generator that includes all information about the generator." +}, +"generatorName": { +"description": "The resource name of the existing generator. Format: `projects//locations//generators/`", +"type": "string" +}, +"triggerEvents": { +"description": "Optional. A list of trigger events. The generator will be triggered only if its trigger event is included here.", +"items": { +"enum": [ +"TRIGGER_EVENT_UNSPECIFIED", +"END_OF_UTTERANCE", +"MANUAL_CALL" +], +"enumDescriptions": [ +"Default value for TriggerEvent.", +"Triggers when each chat message or voice utterance ends.", +"Triggers on the conversation manually by API calls, such as Conversations.GenerateStatelessSuggestion and Conversations.GenerateSuggestions." +], +"type": "string" +}, +"type": "array" +} +}, +"type": "object" +}, +"GoogleCloudDialogflowV2beta1GenerateStatelessSuggestionResponse": { +"description": "The response message for Conversations.GenerateStatelessSuggestion.", +"id": "GoogleCloudDialogflowV2beta1GenerateStatelessSuggestionResponse", +"properties": { +"generatorSuggestion": { +"$ref": "GoogleCloudDialogflowV2beta1GeneratorSuggestion", +"description": "Required. Generated suggestion for a conversation." +} +}, +"type": "object" +}, "GoogleCloudDialogflowV2beta1GenerateStatelessSummaryRequest": { "description": "The request message for Conversations.GenerateStatelessSummary.", "id": "GoogleCloudDialogflowV2beta1GenerateStatelessSummaryRequest", @@ -16422,6 +16779,67 @@ true }, "type": "object" }, +"GoogleCloudDialogflowV2beta1Generator": { +"description": "LLM generator.", +"id": "GoogleCloudDialogflowV2beta1Generator", +"properties": { +"createTime": { +"description": "Output only. Creation time of this generator.", +"format": "google-datetime", +"readOnly": true, +"type": "string" +}, +"description": { +"description": "Optional. Human readable description of the generator.", +"type": "string" +}, +"inferenceParameter": { +"$ref": "GoogleCloudDialogflowV2beta1InferenceParameter", +"description": "Optional. Inference parameters for this generator." +}, +"name": { +"description": "Output only. Identifier. The resource name of the generator. Format: `projects//locations//generators/`", +"readOnly": true, +"type": "string" +}, +"summarizationContext": { +"$ref": "GoogleCloudDialogflowV2beta1SummarizationContext", +"description": "Input of prebuilt Summarization feature." +}, +"triggerEvent": { +"description": "Optional. The trigger event of the generator. 
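The request/response pair above supports one-shot suggestions without persisting a Conversation resource. A hedged sketch of calling it, with the transcript supplied as `ConversationContext.messageEntries` and an existing generator referenced by name; all resource ids are placeholders:

```python
from googleapiclient.discovery import build

dialogflow = build("dialogflow", "v2beta1")
location = "projects/PROJECT_ID/locations/LOCATION_ID"  # placeholder

# Body shaped per GoogleCloudDialogflowV2beta1GenerateStatelessSuggestionRequest.
body = {
    "generatorName": f"{location}/generators/GENERATOR_ID",
    "conversationContext": {
        "messageEntries": [
            {"role": "END_USER", "text": "I want to cancel my order.",
             "languageCode": "en-US"},
            {"role": "HUMAN_AGENT", "text": "Sure, can I have your order number?",
             "languageCode": "en-US"},
        ]
    },
    "triggerEvents": ["MANUAL_CALL"],
}
response = (
    dialogflow.projects()
    .locations()
    .statelessSuggestion()
    .generate(parent=location, body=body)
    .execute()
)
# The response carries a GeneratorSuggestion (e.g. a suggested summary).
print(response.get("generatorSuggestion"))
```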
It defines when the generator is triggered in a conversation.", +"enum": [ +"TRIGGER_EVENT_UNSPECIFIED", +"END_OF_UTTERANCE", +"MANUAL_CALL" +], +"enumDescriptions": [ +"Default value for TriggerEvent.", +"Triggers when each chat message or voice utterance ends.", +"Triggers on the conversation manually by API calls, such as Conversations.GenerateStatelessSuggestion and Conversations.GenerateSuggestions." +], +"type": "string" +}, +"updateTime": { +"description": "Output only. Update time of this generator.", +"format": "google-datetime", +"readOnly": true, +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDialogflowV2beta1GeneratorSuggestion": { +"description": "Suggestion generated using a Generator.", +"id": "GoogleCloudDialogflowV2beta1GeneratorSuggestion", +"properties": { +"summarySuggestion": { +"$ref": "GoogleCloudDialogflowV2beta1SummarySuggestion", +"description": "Optional. Suggested summary." +} +}, +"type": "object" +}, "GoogleCloudDialogflowV2beta1HumanAgentAssistantConfig": { "description": "Defines the Human Agent Assistant to connect to a conversation.", "id": "GoogleCloudDialogflowV2beta1HumanAgentAssistantConfig", @@ -16498,6 +16916,13 @@ true }, "type": "array" }, +"generators": { +"description": "Optional. List of various generator resource names used in the conversation profile.", +"items": { +"type": "string" +}, +"type": "array" +}, "groupSuggestionResponses": { "description": "If `group_suggestion_responses` is false, and there are multiple `feature_configs` in `event based suggestion` or StreamingAnalyzeContent, we will try to deliver suggestions to customers as soon as we get new suggestion. Different type of suggestions based on the same context will be in separate Pub/Sub event or `StreamingAnalyzeContentResponse`. If `group_suggestion_responses` set to true. All the suggestions to the same participant based on the same context will be grouped into a single Pub/Sub event or StreamingAnalyzeContentResponse.", "type": "boolean" @@ -16729,7 +17154,7 @@ true "properties": { "livePersonConfig": { "$ref": "GoogleCloudDialogflowV2beta1HumanAgentHandoffConfigLivePersonConfig", -"description": "Uses LivePerson (https://www.liveperson.com)." +"description": "Uses [LivePerson](https://www.liveperson.com)." }, "salesforceLiveAgentConfig": { "$ref": "GoogleCloudDialogflowV2beta1HumanAgentHandoffConfigSalesforceLiveAgentConfig", @@ -16739,7 +17164,7 @@ true "type": "object" }, "GoogleCloudDialogflowV2beta1HumanAgentHandoffConfigLivePersonConfig": { -"description": "Configuration specific to LivePerson (https://www.liveperson.com).", +"description": "Configuration specific to [LivePerson](https://www.liveperson.com).", "id": "GoogleCloudDialogflowV2beta1HumanAgentHandoffConfigLivePersonConfig", "properties": { "accountNumber": { @@ -16862,6 +17287,33 @@ true }, "type": "object" }, +"GoogleCloudDialogflowV2beta1InferenceParameter": { +"description": "The parameters of inference.", +"id": "GoogleCloudDialogflowV2beta1InferenceParameter", +"properties": { +"maxOutputTokens": { +"description": "Optional. Maximum number of the output tokens for the generator.", +"format": "int32", +"type": "integer" +}, +"temperature": { +"description": "Optional. Controls the randomness of LLM predictions. Low temperature = less random. High temperature = more random. If unset (or 0), uses a default value of 0.", +"format": "double", +"type": "number" +}, +"topK": { +"description": "Optional. Top-k changes how the model selects tokens for output. 
A top-k of 1 means the selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-k of 3 means that the next token is selected from among the 3 most probable tokens (using temperature). For each token selection step, the top K tokens with the highest probabilities are sampled. Then tokens are further filtered based on topP with the final token selected using temperature sampling. Specify a lower value for less random responses and a higher value for more random responses. Acceptable values are in [1, 40]; defaults to 40.", +"format": "int32", +"type": "integer" +}, +"topP": { +"description": "Optional. Top-p changes how the model selects tokens for output. Tokens are selected from most probable to least, within the top K tokens (see the topK parameter), until the sum of their probabilities equals the top-p value. For example, if tokens A, B, and C have a probability of 0.3, 0.2, and 0.1 and the top-p value is 0.5, then the model will select either A or B as the next token (using temperature) and doesn't consider C. The default top-p value is 0.95. Specify a lower value for less random responses and a higher value for more random responses. Acceptable values are in [0.0, 1.0]; defaults to 0.95.", +"format": "double", +"type": "number" +} +}, +"type": "object" +}, "GoogleCloudDialogflowV2beta1InputAudioConfig": { "description": "Instructs the speech recognizer on how to process the audio content.", "id": "GoogleCloudDialogflowV2beta1InputAudioConfig", @@ -16894,6 +17346,11 @@ true "$ref": "GoogleCloudDialogflowV2beta1BargeInConfig", "description": "Configuration of barge-in behavior during the streaming of input audio." }, +"defaultNoSpeechTimeout": { +"description": "If set, use this no-speech timeout when the agent does not provide a no-speech timeout itself.", +"format": "google-duration", +"type": "string" +}, "disableNoSpeechRecognizedEvent": { "description": "Only used in Participants.AnalyzeContent and Participants.StreamingAnalyzeContent. If `false` and recognition doesn't return any result, trigger `NO_SPEECH_RECOGNIZED` event to Dialogflow agent.", "type": "boolean" @@ -18452,6 +18909,24 @@ true }, "type": "object" }, +"GoogleCloudDialogflowV2beta1ListGeneratorsResponse": { +"description": "Response of ListGenerators.", +"id": "GoogleCloudDialogflowV2beta1ListGeneratorsResponse", +"properties": { +"generators": { +"description": "List of generators retrieved.", +"items": { +"$ref": "GoogleCloudDialogflowV2beta1Generator" +}, +"type": "array" +}, +"nextPageToken": { +"description": "Token to retrieve the next page of results, or empty if there are no more results in the list.", +"type": "string" +} +}, +"type": "object" +}, "GoogleCloudDialogflowV2beta1ListIntentsResponse": { "description": "The response message for Intents.ListIntents.", "id": "GoogleCloudDialogflowV2beta1ListIntentsResponse", @@ -18670,6 +19145,42 @@ true }, "type": "object" }, +"GoogleCloudDialogflowV2beta1MessageEntry": { +"description": "Represents a message entry of a conversation.", +"id": "GoogleCloudDialogflowV2beta1MessageEntry", +"properties": { +"createTime": { +"description": "Optional. Create time of the message entry.", +"format": "google-datetime", +"type": "string" +}, +"languageCode": { +"description": "Optional. The language of the text. See [Language Support](https://cloud.google.com/dialogflow/docs/reference/language) for a list of the currently supported language codes.", +"type": "string" +}, +"role": { +"description": "Optional. 
Participant role of the message.", +"enum": [ +"ROLE_UNSPECIFIED", +"HUMAN_AGENT", +"AUTOMATED_AGENT", +"END_USER" +], +"enumDescriptions": [ +"Participant role not set.", +"Participant is a human agent.", +"Participant is an automated agent, such as a Dialogflow agent.", +"Participant is an end user that has called or chatted with Dialogflow services." +], +"type": "string" +}, +"text": { +"description": "Optional. Transcript content of the message.", +"type": "string" +} +}, +"type": "object" +}, "GoogleCloudDialogflowV2beta1NotificationConfig": { "description": "Defines notification behavior.", "id": "GoogleCloudDialogflowV2beta1NotificationConfig", @@ -19915,6 +20426,117 @@ true }, "type": "object" }, +"GoogleCloudDialogflowV2beta1SummarizationContext": { +"description": "Summarization context that the customer can configure.", +"id": "GoogleCloudDialogflowV2beta1SummarizationContext", +"properties": { +"fewShotExamples": { +"description": "Optional. List of few shot examples.", +"items": { +"$ref": "GoogleCloudDialogflowV2beta1FewShotExample" +}, +"type": "array" +}, +"outputLanguageCode": { +"description": "Optional. The target language of the generated summary. The language code for the conversation will be used if this field is empty. Supported in version 2.0 and later.", +"type": "string" +}, +"summarizationSections": { +"description": "Optional. List of sections. Note it contains both predefined sections and customer-defined sections.", +"items": { +"$ref": "GoogleCloudDialogflowV2beta1SummarizationSection" +}, +"type": "array" +}, +"version": { +"description": "Optional. Version of the feature. If not set, defaults to the latest version. Current candidates are [\"1.0\"].", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDialogflowV2beta1SummarizationSection": { +"description": "Represents the section of summarization.", +"id": "GoogleCloudDialogflowV2beta1SummarizationSection", +"properties": { +"definition": { +"description": "Optional. Definition of the section, for example, \"what the customer needs help with or has a question about.\"", +"type": "string" +}, +"key": { +"description": "Optional. Name of the section, for example, \"situation\".", +"type": "string" +}, +"type": { +"description": "Optional. Type of the summarization section.", +"enum": [ +"TYPE_UNSPECIFIED", +"SITUATION", +"ACTION", +"RESOLUTION", +"REASON_FOR_CANCELLATION", +"CUSTOMER_SATISFACTION", +"ENTITIES", +"CUSTOMER_DEFINED" +], +"enumDescriptions": [ +"Undefined section type, does not return anything.", +"What the customer needs help with or has a question about. Section name: \"situation\".", +"What the agent does to help the customer. Section name: \"action\".", +"Result of the customer service. A single word describing the result of the conversation. Section name: \"resolution\".", +"Reason for cancellation if the customer requests a cancellation. \"N/A\" otherwise. Section name: \"reason_for_cancellation\".", +"\"Unsatisfied\" or \"Satisfied\" depending on the customer's feelings at the end of the conversation. Section name: \"customer_satisfaction\".", +"Key entities extracted from the conversation, such as ticket number, order number, dollar amount, etc. Section names are prefixed by \"entities/\".", +"Customer defined sections." 
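Put together, the `SummarizationContext` and `SummarizationSection` schemas above compose into a `Generator` resource like the illustrative dict below; the section keys and the customer-defined section are made-up values, and `version` follows the documented candidate `"1.0"`. Such a dict can be passed inline via the `generator` field of `GenerateStatelessSuggestionRequest` or persisted with the generator methods:

```python
# Illustrative Generator resource built from the schemas above.
generator = {
    "description": "Summarizes support conversations",
    "triggerEvent": "MANUAL_CALL",
    "inferenceParameter": {"temperature": 0.2, "maxOutputTokens": 256},
    "summarizationContext": {
        "version": "1.0",
        "outputLanguageCode": "en-US",
        "summarizationSections": [
            {"type": "SITUATION", "key": "situation"},
            {"type": "ACTION", "key": "action"},
            {
                "type": "CUSTOMER_DEFINED",
                "key": "follow_up",  # hypothetical customer-defined section
                "definition": "Any follow-up actions promised to the customer.",
            },
        ],
    },
}
```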
+], +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDialogflowV2beta1SummarizationSectionList": { +"description": "List of summarization sections.", +"id": "GoogleCloudDialogflowV2beta1SummarizationSectionList", +"properties": { +"summarizationSections": { +"description": "Optional. Summarization sections.", +"items": { +"$ref": "GoogleCloudDialogflowV2beta1SummarizationSection" +}, +"type": "array" +} +}, +"type": "object" +}, +"GoogleCloudDialogflowV2beta1SummarySuggestion": { +"description": "Suggested summary of the conversation.", +"id": "GoogleCloudDialogflowV2beta1SummarySuggestion", +"properties": { +"summarySections": { +"description": "Required. All the parts of generated summary.", +"items": { +"$ref": "GoogleCloudDialogflowV2beta1SummarySuggestionSummarySection" +}, +"type": "array" +} +}, +"type": "object" +}, +"GoogleCloudDialogflowV2beta1SummarySuggestionSummarySection": { +"description": "A component of the generated summary.", +"id": "GoogleCloudDialogflowV2beta1SummarySuggestionSummarySection", +"properties": { +"section": { +"description": "Required. Name of the section.", +"type": "string" +}, +"summary": { +"description": "Required. Summary text for the section.", +"type": "string" +} +}, +"type": "object" +}, "GoogleCloudDialogflowV2beta1SynthesizeSpeechConfig": { "description": "Configuration of how speech should be synthesized.", "id": "GoogleCloudDialogflowV2beta1SynthesizeSpeechConfig", diff --git a/googleapiclient/discovery_cache/documents/dialogflow.v3.json b/googleapiclient/discovery_cache/documents/dialogflow.v3.json index d628b51110b..3b01711e641 100644 --- a/googleapiclient/discovery_cache/documents/dialogflow.v3.json +++ b/googleapiclient/discovery_cache/documents/dialogflow.v3.json @@ -4453,7 +4453,7 @@ } } }, -"revision": "20240520", +"revision": "20240523", "rootUrl": "https://dialogflow.googleapis.com/", "schemas": { "GoogleCloudDialogflowCxV3AdvancedSettings": { diff --git a/googleapiclient/discovery_cache/documents/dialogflow.v3beta1.json b/googleapiclient/discovery_cache/documents/dialogflow.v3beta1.json index 71c7029b551..b843a2cb219 100644 --- a/googleapiclient/discovery_cache/documents/dialogflow.v3beta1.json +++ b/googleapiclient/discovery_cache/documents/dialogflow.v3beta1.json @@ -4551,7 +4551,7 @@ } } }, -"revision": "20240520", +"revision": "20240523", "rootUrl": "https://dialogflow.googleapis.com/", "schemas": { "GoogleCloudDialogflowCxV3AdvancedSettings": { diff --git a/googleapiclient/discovery_cache/documents/digitalassetlinks.v1.json b/googleapiclient/discovery_cache/documents/digitalassetlinks.v1.json index 27b9bd72852..9af9e2aa018 100644 --- a/googleapiclient/discovery_cache/documents/digitalassetlinks.v1.json +++ b/googleapiclient/discovery_cache/documents/digitalassetlinks.v1.json @@ -199,7 +199,7 @@ } } }, -"revision": "20240521", +"revision": "20240528", "rootUrl": "https://digitalassetlinks.googleapis.com/", "schemas": { "AndroidAppAsset": { diff --git a/googleapiclient/discovery_cache/documents/discoveryengine.v1.json b/googleapiclient/discovery_cache/documents/discoveryengine.v1.json index 6ece8a9c25a..af7962de27d 100644 --- a/googleapiclient/discovery_cache/documents/discoveryengine.v1.json +++ b/googleapiclient/discovery_cache/documents/discoveryengine.v1.json @@ -106,6 +106,36 @@ "protocol": "rest", "resources": { "projects": { +"methods": { +"provision": { +"description": "Provisions the project resource. During the process, related systems will get prepared and initialized. 
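The `discoveryengine.projects.provision` method being added here returns a long-running operation. A sketch of invoking it from the Python client; the request-body field names are an assumption based on the `ProvisionProjectRequest` schema defined elsewhere in this artifact, so verify them against the schema before use:

```python
from googleapiclient.discovery import build

discoveryengine = build("discoveryengine", "v1")

operation = (
    discoveryengine.projects()
    .provision(
        name="projects/PROJECT_ID",  # placeholder project
        body={
            # Assumed fields of GoogleCloudDiscoveryengineV1ProvisionProjectRequest:
            "acceptDataUseTerms": True,
            "dataUseTermsVersion": "2022-11-23",
        },
    )
    .execute()
)
print(operation["name"])  # poll via the operations.get method
```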
Caller must read the [Terms for data use](https://cloud.google.com/retail/data-use-terms) and optionally specify in the request whether to provide consent to those service terms.", +"flatPath": "v1/projects/{projectsId}:provision", +"httpMethod": "POST", +"id": "discoveryengine.projects.provision", +"parameterOrder": [ +"name" +], +"parameters": { +"name": { +"description": "Required. Full resource name of a Project, such as `projects/{project_id_or_number}`.", +"location": "path", +"pattern": "^projects/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1/{+name}:provision", +"request": { +"$ref": "GoogleCloudDiscoveryengineV1ProvisionProjectRequest" +}, +"response": { +"$ref": "GoogleLongrunningOperation" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +} +}, "resources": { "locations": { "resources": { @@ -355,7 +385,7 @@ ], "parameters": { "filter": { -"description": "Filter by solution type . For example: filter = 'solution_type:SOLUTION_TYPE_SEARCH'", +"description": "Filter by solution type. For example: `filter = 'solution_type:SOLUTION_TYPE_SEARCH'`", "location": "query", "type": "string" }, @@ -742,6 +772,168 @@ } } }, +"controls": { +"methods": { +"create": { +"description": "Creates a Control. By default 1000 controls are allowed for a data store. A request can be submitted to adjust this limit. If the Control to create already exists, an ALREADY_EXISTS error is returned.", +"flatPath": "v1/projects/{projectsId}/locations/{locationsId}/collections/{collectionsId}/dataStores/{dataStoresId}/controls", +"httpMethod": "POST", +"id": "discoveryengine.projects.locations.collections.dataStores.controls.create", +"parameterOrder": [ +"parent" +], +"parameters": { +"controlId": { +"description": "Required. The ID to use for the Control, which will become the final component of the Control's resource name. This value must be within 1-63 characters. Valid characters are /a-z-_/.", +"location": "query", +"type": "string" +}, +"parent": { +"description": "Required. Full resource name of parent data store. Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}`", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/collections/[^/]+/dataStores/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1/{+parent}/controls", +"request": { +"$ref": "GoogleCloudDiscoveryengineV1Control" +}, +"response": { +"$ref": "GoogleCloudDiscoveryengineV1Control" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +}, +"delete": { +"description": "Deletes a Control. If the Control to delete does not exist, a NOT_FOUND error is returned.", +"flatPath": "v1/projects/{projectsId}/locations/{locationsId}/collections/{collectionsId}/dataStores/{dataStoresId}/controls/{controlsId}", +"httpMethod": "DELETE", +"id": "discoveryengine.projects.locations.collections.dataStores.controls.delete", +"parameterOrder": [ +"name" +], +"parameters": { +"name": { +"description": "Required. The resource name of the Control to delete. 
Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}/controls/{control_id}`", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/collections/[^/]+/dataStores/[^/]+/controls/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1/{+name}", +"response": { +"$ref": "GoogleProtobufEmpty" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +}, +"get": { +"description": "Gets a Control.", +"flatPath": "v1/projects/{projectsId}/locations/{locationsId}/collections/{collectionsId}/dataStores/{dataStoresId}/controls/{controlsId}", +"httpMethod": "GET", +"id": "discoveryengine.projects.locations.collections.dataStores.controls.get", +"parameterOrder": [ +"name" +], +"parameters": { +"name": { +"description": "Required. The resource name of the Control to get. Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}/controls/{control_id}`", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/collections/[^/]+/dataStores/[^/]+/controls/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1/{+name}", +"response": { +"$ref": "GoogleCloudDiscoveryengineV1Control" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +}, +"list": { +"description": "Lists all Controls by their parent DataStore.", +"flatPath": "v1/projects/{projectsId}/locations/{locationsId}/collections/{collectionsId}/dataStores/{dataStoresId}/controls", +"httpMethod": "GET", +"id": "discoveryengine.projects.locations.collections.dataStores.controls.list", +"parameterOrder": [ +"parent" +], +"parameters": { +"filter": { +"description": "Optional. A filter to apply on the list results. Supported features: * List all the products under the parent branch if filter is unset. Currently this field is unsupported.", +"location": "query", +"type": "string" +}, +"pageSize": { +"description": "Optional. Maximum number of results to return. If unspecified, defaults to 50. Max allowed value is 1000.", +"format": "int32", +"location": "query", +"type": "integer" +}, +"pageToken": { +"description": "Optional. A page token, received from a previous `ListControls` call. Provide this to retrieve the subsequent page.", +"location": "query", +"type": "string" +}, +"parent": { +"description": "Required. The data store resource name. Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}`", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/collections/[^/]+/dataStores/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1/{+parent}/controls", +"response": { +"$ref": "GoogleCloudDiscoveryengineV1ListControlsResponse" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +}, +"patch": { +"description": "Updates a Control. Control action type cannot be changed. If the Control to update does not exist, a NOT_FOUND error is returned.", +"flatPath": "v1/projects/{projectsId}/locations/{locationsId}/collections/{collectionsId}/dataStores/{dataStoresId}/controls/{controlsId}", +"httpMethod": "PATCH", +"id": "discoveryengine.projects.locations.collections.dataStores.controls.patch", +"parameterOrder": [ +"name" +], +"parameters": { +"name": { +"description": "Immutable. 
Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/collections/[^/]+/dataStores/[^/]+/controls/[^/]+$", +"required": true, +"type": "string" +}, +"updateMask": { +"description": "Optional. Indicates which fields in the provided Control to update. The following are NOT supported: * Control.name * Control.solution_type If not set or empty, all supported fields are updated.", +"format": "google-fieldmask", +"location": "query", +"type": "string" +} +}, +"path": "v1/{+name}", +"request": { +"$ref": "GoogleCloudDiscoveryengineV1Control" +}, +"response": { +"$ref": "GoogleCloudDiscoveryengineV1Control" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +} +} +}, "conversations": { "methods": { "converse": { @@ -2201,6 +2393,11 @@ "pattern": "^projects/[^/]+/locations/[^/]+/collections/[^/]+/dataStores/[^/]+$", "required": true, "type": "string" +}, +"writeAsync": { +"description": "If set to true, the user event is written asynchronously after validation, and the API responds without waiting for the write.", +"location": "query", +"type": "boolean" } }, "path": "v1/{+parent}/userEvents:write", @@ -2380,77 +2577,54 @@ } }, "resources": { -"conversations": { +"controls": { "methods": { -"converse": { -"description": "Converses a conversation.", -"flatPath": "v1/projects/{projectsId}/locations/{locationsId}/collections/{collectionsId}/engines/{enginesId}/conversations/{conversationsId}:converse", -"httpMethod": "POST", -"id": "discoveryengine.projects.locations.collections.engines.conversations.converse", -"parameterOrder": [ -"name" -], -"parameters": { -"name": { -"description": "Required. The resource name of the Conversation to get. Format: `projects/{project_number}/locations/{location_id}/collections/{collection}/dataStores/{data_store_id}/conversations/{conversation_id}`. Use `projects/{project_number}/locations/{location_id}/collections/{collection}/dataStores/{data_store_id}/conversations/-` to activate auto session mode, which automatically creates a new conversation inside a ConverseConversation session.", -"location": "path", -"pattern": "^projects/[^/]+/locations/[^/]+/collections/[^/]+/engines/[^/]+/conversations/[^/]+$", -"required": true, -"type": "string" -} -}, -"path": "v1/{+name}:converse", -"request": { -"$ref": "GoogleCloudDiscoveryengineV1ConverseConversationRequest" -}, -"response": { -"$ref": "GoogleCloudDiscoveryengineV1ConverseConversationResponse" -}, -"scopes": [ -"https://www.googleapis.com/auth/cloud-platform" -] -}, "create": { -"description": "Creates a Conversation. If the Conversation to create already exists, an ALREADY_EXISTS error is returned.", -"flatPath": "v1/projects/{projectsId}/locations/{locationsId}/collections/{collectionsId}/engines/{enginesId}/conversations", +"description": "Creates a Control. By default 1000 controls are allowed for a data store. A request can be submitted to adjust this limit. If the Control to create already exists, an ALREADY_EXISTS error is returned.", +"flatPath": "v1/projects/{projectsId}/locations/{locationsId}/collections/{collectionsId}/engines/{enginesId}/controls", "httpMethod": "POST", -"id": "discoveryengine.projects.locations.collections.engines.conversations.create", +"id": "discoveryengine.projects.locations.collections.engines.controls.create", "parameterOrder": [ "parent" ], "parameters": { +"controlId": { +"description": "Required. 
The ID to use for the Control, which will become the final component of the Control's resource name. This value must be within 1-63 characters. Valid characters are /a-z-_/.", +"location": "query", +"type": "string" +}, "parent": { -"description": "Required. Full resource name of parent data store. Format: `projects/{project_number}/locations/{location_id}/collections/{collection}/dataStores/{data_store_id}`", +"description": "Required. Full resource name of parent data store. Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}`", "location": "path", "pattern": "^projects/[^/]+/locations/[^/]+/collections/[^/]+/engines/[^/]+$", "required": true, "type": "string" } }, -"path": "v1/{+parent}/conversations", +"path": "v1/{+parent}/controls", "request": { -"$ref": "GoogleCloudDiscoveryengineV1Conversation" +"$ref": "GoogleCloudDiscoveryengineV1Control" }, "response": { -"$ref": "GoogleCloudDiscoveryengineV1Conversation" +"$ref": "GoogleCloudDiscoveryengineV1Control" }, "scopes": [ "https://www.googleapis.com/auth/cloud-platform" ] }, "delete": { -"description": "Deletes a Conversation. If the Conversation to delete does not exist, a NOT_FOUND error is returned.", -"flatPath": "v1/projects/{projectsId}/locations/{locationsId}/collections/{collectionsId}/engines/{enginesId}/conversations/{conversationsId}", +"description": "Deletes a Control. If the Control to delete does not exist, a NOT_FOUND error is returned.", +"flatPath": "v1/projects/{projectsId}/locations/{locationsId}/collections/{collectionsId}/engines/{enginesId}/controls/{controlsId}", "httpMethod": "DELETE", -"id": "discoveryengine.projects.locations.collections.engines.conversations.delete", +"id": "discoveryengine.projects.locations.collections.engines.controls.delete", "parameterOrder": [ "name" ], "parameters": { "name": { -"description": "Required. The resource name of the Conversation to delete. Format: `projects/{project_number}/locations/{location_id}/collections/{collection}/dataStores/{data_store_id}/conversations/{conversation_id}`", +"description": "Required. The resource name of the Control to delete. Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}/controls/{control_id}`", "location": "path", -"pattern": "^projects/[^/]+/locations/[^/]+/collections/[^/]+/engines/[^/]+/conversations/[^/]+$", +"pattern": "^projects/[^/]+/locations/[^/]+/collections/[^/]+/engines/[^/]+/controls/[^/]+$", "required": true, "type": "string" } @@ -2464,94 +2638,89 @@ ] }, "get": { -"description": "Gets a Conversation.", -"flatPath": "v1/projects/{projectsId}/locations/{locationsId}/collections/{collectionsId}/engines/{enginesId}/conversations/{conversationsId}", +"description": "Gets a Control.", +"flatPath": "v1/projects/{projectsId}/locations/{locationsId}/collections/{collectionsId}/engines/{enginesId}/controls/{controlsId}", "httpMethod": "GET", -"id": "discoveryengine.projects.locations.collections.engines.conversations.get", +"id": "discoveryengine.projects.locations.collections.engines.controls.get", "parameterOrder": [ "name" ], "parameters": { "name": { -"description": "Required. The resource name of the Conversation to get. Format: `projects/{project_number}/locations/{location_id}/collections/{collection}/dataStores/{data_store_id}/conversations/{conversation_id}`", +"description": "Required. The resource name of the Control to get. 
Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}/controls/{control_id}`", "location": "path", -"pattern": "^projects/[^/]+/locations/[^/]+/collections/[^/]+/engines/[^/]+/conversations/[^/]+$", +"pattern": "^projects/[^/]+/locations/[^/]+/collections/[^/]+/engines/[^/]+/controls/[^/]+$", "required": true, "type": "string" } }, "path": "v1/{+name}", "response": { -"$ref": "GoogleCloudDiscoveryengineV1Conversation" +"$ref": "GoogleCloudDiscoveryengineV1Control" }, "scopes": [ "https://www.googleapis.com/auth/cloud-platform" ] }, "list": { -"description": "Lists all Conversations by their parent DataStore.", -"flatPath": "v1/projects/{projectsId}/locations/{locationsId}/collections/{collectionsId}/engines/{enginesId}/conversations", +"description": "Lists all Controls by their parent DataStore.", +"flatPath": "v1/projects/{projectsId}/locations/{locationsId}/collections/{collectionsId}/engines/{enginesId}/controls", "httpMethod": "GET", -"id": "discoveryengine.projects.locations.collections.engines.conversations.list", +"id": "discoveryengine.projects.locations.collections.engines.controls.list", "parameterOrder": [ "parent" ], "parameters": { "filter": { -"description": "A filter to apply on the list results. The supported features are: user_pseudo_id, state. Example: \"user_pseudo_id = some_id\"", -"location": "query", -"type": "string" -}, -"orderBy": { -"description": "A comma-separated list of fields to order by, sorted in ascending order. Use \"desc\" after a field name for descending. Supported fields: * `update_time` * `create_time` * `conversation_name` Example: \"update_time desc\" \"create_time\"", +"description": "Optional. A filter to apply on the list results. Supported features: * List all the products under the parent branch if filter is unset. Currently this field is unsupported.", "location": "query", "type": "string" }, "pageSize": { -"description": "Maximum number of results to return. If unspecified, defaults to 50. Max allowed value is 1000.", +"description": "Optional. Maximum number of results to return. If unspecified, defaults to 50. Max allowed value is 1000.", "format": "int32", "location": "query", "type": "integer" }, "pageToken": { -"description": "A page token, received from a previous `ListConversations` call. Provide this to retrieve the subsequent page.", +"description": "Optional. A page token, received from a previous `ListControls` call. Provide this to retrieve the subsequent page.", "location": "query", "type": "string" }, "parent": { -"description": "Required. The data store resource name. Format: `projects/{project_number}/locations/{location_id}/collections/{collection}/dataStores/{data_store_id}`", +"description": "Required. The data store resource name. Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}`", "location": "path", "pattern": "^projects/[^/]+/locations/[^/]+/collections/[^/]+/engines/[^/]+$", "required": true, "type": "string" } }, -"path": "v1/{+parent}/conversations", +"path": "v1/{+parent}/controls", "response": { -"$ref": "GoogleCloudDiscoveryengineV1ListConversationsResponse" +"$ref": "GoogleCloudDiscoveryengineV1ListControlsResponse" }, "scopes": [ "https://www.googleapis.com/auth/cloud-platform" ] }, "patch": { -"description": "Updates a Conversation. Conversation action type cannot be changed. 
If the Conversation to update does not exist, a NOT_FOUND error is returned.", -"flatPath": "v1/projects/{projectsId}/locations/{locationsId}/collections/{collectionsId}/engines/{enginesId}/conversations/{conversationsId}", +"description": "Updates a Control. Control action type cannot be changed. If the Control to update does not exist, a NOT_FOUND error is returned.", +"flatPath": "v1/projects/{projectsId}/locations/{locationsId}/collections/{collectionsId}/engines/{enginesId}/controls/{controlsId}", "httpMethod": "PATCH", -"id": "discoveryengine.projects.locations.collections.engines.conversations.patch", +"id": "discoveryengine.projects.locations.collections.engines.controls.patch", "parameterOrder": [ "name" ], "parameters": { "name": { -"description": "Immutable. Fully qualified name `projects/{project}/locations/global/collections/{collection}/dataStore/*/conversations/*` or `projects/{project}/locations/global/collections/{collection}/engines/*/conversations/*`.", +"description": "Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`", "location": "path", -"pattern": "^projects/[^/]+/locations/[^/]+/collections/[^/]+/engines/[^/]+/conversations/[^/]+$", +"pattern": "^projects/[^/]+/locations/[^/]+/collections/[^/]+/engines/[^/]+/controls/[^/]+$", "required": true, "type": "string" }, "updateMask": { -"description": "Indicates which fields in the provided Conversation to update. The following are NOT supported: * Conversation.name If not set or empty, all supported fields are updated.", +"description": "Optional. Indicates which fields in the provided Control to update. The following are NOT supported: * Control.name * Control.solution_type If not set or empty, all supported fields are updated.", "format": "google-fieldmask", "location": "query", "type": "string" @@ -2559,10 +2728,10 @@ }, "path": "v1/{+name}", "request": { -"$ref": "GoogleCloudDiscoveryengineV1Conversation" +"$ref": "GoogleCloudDiscoveryengineV1Control" }, "response": { -"$ref": "GoogleCloudDiscoveryengineV1Conversation" +"$ref": "GoogleCloudDiscoveryengineV1Control" }, "scopes": [ "https://www.googleapis.com/auth/cloud-platform" @@ -2570,40 +2739,230 @@ } } }, -"operations": { +"conversations": { "methods": { -"get": { -"description": "Gets the latest state of a long-running operation. Clients can use this method to poll the operation result at intervals as recommended by the API service.", -"flatPath": "v1/projects/{projectsId}/locations/{locationsId}/collections/{collectionsId}/engines/{enginesId}/operations/{operationsId}", -"httpMethod": "GET", -"id": "discoveryengine.projects.locations.collections.engines.operations.get", +"converse": { +"description": "Converses a conversation.", +"flatPath": "v1/projects/{projectsId}/locations/{locationsId}/collections/{collectionsId}/engines/{enginesId}/conversations/{conversationsId}:converse", +"httpMethod": "POST", +"id": "discoveryengine.projects.locations.collections.engines.conversations.converse", "parameterOrder": [ "name" ], "parameters": { "name": { -"description": "The name of the operation resource.", +"description": "Required. The resource name of the Conversation to get. Format: `projects/{project_number}/locations/{location_id}/collections/{collection}/dataStores/{data_store_id}/conversations/{conversation_id}`. 
Use `projects/{project_number}/locations/{location_id}/collections/{collection}/dataStores/{data_store_id}/conversations/-` to activate auto session mode, which automatically creates a new conversation inside a ConverseConversation session.", "location": "path", -"pattern": "^projects/[^/]+/locations/[^/]+/collections/[^/]+/engines/[^/]+/operations/[^/]+$", +"pattern": "^projects/[^/]+/locations/[^/]+/collections/[^/]+/engines/[^/]+/conversations/[^/]+$", "required": true, "type": "string" } }, -"path": "v1/{+name}", +"path": "v1/{+name}:converse", +"request": { +"$ref": "GoogleCloudDiscoveryengineV1ConverseConversationRequest" +}, "response": { -"$ref": "GoogleLongrunningOperation" +"$ref": "GoogleCloudDiscoveryengineV1ConverseConversationResponse" }, "scopes": [ "https://www.googleapis.com/auth/cloud-platform" ] }, -"list": { -"description": "Lists operations that match the specified filter in the request. If the server doesn't support this method, it returns `UNIMPLEMENTED`.", -"flatPath": "v1/projects/{projectsId}/locations/{locationsId}/collections/{collectionsId}/engines/{enginesId}/operations", -"httpMethod": "GET", -"id": "discoveryengine.projects.locations.collections.engines.operations.list", +"create": { +"description": "Creates a Conversation. If the Conversation to create already exists, an ALREADY_EXISTS error is returned.", +"flatPath": "v1/projects/{projectsId}/locations/{locationsId}/collections/{collectionsId}/engines/{enginesId}/conversations", +"httpMethod": "POST", +"id": "discoveryengine.projects.locations.collections.engines.conversations.create", "parameterOrder": [ -"name" +"parent" +], +"parameters": { +"parent": { +"description": "Required. Full resource name of parent data store. Format: `projects/{project_number}/locations/{location_id}/collections/{collection}/dataStores/{data_store_id}`", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/collections/[^/]+/engines/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1/{+parent}/conversations", +"request": { +"$ref": "GoogleCloudDiscoveryengineV1Conversation" +}, +"response": { +"$ref": "GoogleCloudDiscoveryengineV1Conversation" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +}, +"delete": { +"description": "Deletes a Conversation. If the Conversation to delete does not exist, a NOT_FOUND error is returned.", +"flatPath": "v1/projects/{projectsId}/locations/{locationsId}/collections/{collectionsId}/engines/{enginesId}/conversations/{conversationsId}", +"httpMethod": "DELETE", +"id": "discoveryengine.projects.locations.collections.engines.conversations.delete", +"parameterOrder": [ +"name" +], +"parameters": { +"name": { +"description": "Required. The resource name of the Conversation to delete. 
Format: `projects/{project_number}/locations/{location_id}/collections/{collection}/dataStores/{data_store_id}/conversations/{conversation_id}`", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/collections/[^/]+/engines/[^/]+/conversations/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1/{+name}", +"response": { +"$ref": "GoogleProtobufEmpty" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +}, +"get": { +"description": "Gets a Conversation.", +"flatPath": "v1/projects/{projectsId}/locations/{locationsId}/collections/{collectionsId}/engines/{enginesId}/conversations/{conversationsId}", +"httpMethod": "GET", +"id": "discoveryengine.projects.locations.collections.engines.conversations.get", +"parameterOrder": [ +"name" +], +"parameters": { +"name": { +"description": "Required. The resource name of the Conversation to get. Format: `projects/{project_number}/locations/{location_id}/collections/{collection}/dataStores/{data_store_id}/conversations/{conversation_id}`", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/collections/[^/]+/engines/[^/]+/conversations/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1/{+name}", +"response": { +"$ref": "GoogleCloudDiscoveryengineV1Conversation" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +}, +"list": { +"description": "Lists all Conversations by their parent DataStore.", +"flatPath": "v1/projects/{projectsId}/locations/{locationsId}/collections/{collectionsId}/engines/{enginesId}/conversations", +"httpMethod": "GET", +"id": "discoveryengine.projects.locations.collections.engines.conversations.list", +"parameterOrder": [ +"parent" +], +"parameters": { +"filter": { +"description": "A filter to apply on the list results. The supported features are: user_pseudo_id, state. Example: \"user_pseudo_id = some_id\"", +"location": "query", +"type": "string" +}, +"orderBy": { +"description": "A comma-separated list of fields to order by, sorted in ascending order. Use \"desc\" after a field name for descending. Supported fields: * `update_time` * `create_time` * `conversation_name` Example: \"update_time desc\" \"create_time\"", +"location": "query", +"type": "string" +}, +"pageSize": { +"description": "Maximum number of results to return. If unspecified, defaults to 50. Max allowed value is 1000.", +"format": "int32", +"location": "query", +"type": "integer" +}, +"pageToken": { +"description": "A page token, received from a previous `ListConversations` call. Provide this to retrieve the subsequent page.", +"location": "query", +"type": "string" +}, +"parent": { +"description": "Required. The data store resource name. Format: `projects/{project_number}/locations/{location_id}/collections/{collection}/dataStores/{data_store_id}`", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/collections/[^/]+/engines/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1/{+parent}/conversations", +"response": { +"$ref": "GoogleCloudDiscoveryengineV1ListConversationsResponse" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +}, +"patch": { +"description": "Updates a Conversation. Conversation action type cannot be changed. 
If the Conversation to update does not exist, a NOT_FOUND error is returned.", +"flatPath": "v1/projects/{projectsId}/locations/{locationsId}/collections/{collectionsId}/engines/{enginesId}/conversations/{conversationsId}", +"httpMethod": "PATCH", +"id": "discoveryengine.projects.locations.collections.engines.conversations.patch", +"parameterOrder": [ +"name" +], +"parameters": { +"name": { +"description": "Immutable. Fully qualified name `projects/{project}/locations/global/collections/{collection}/dataStore/*/conversations/*` or `projects/{project}/locations/global/collections/{collection}/engines/*/conversations/*`.", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/collections/[^/]+/engines/[^/]+/conversations/[^/]+$", +"required": true, +"type": "string" +}, +"updateMask": { +"description": "Indicates which fields in the provided Conversation to update. The following are NOT supported: * Conversation.name If not set or empty, all supported fields are updated.", +"format": "google-fieldmask", +"location": "query", +"type": "string" +} +}, +"path": "v1/{+name}", +"request": { +"$ref": "GoogleCloudDiscoveryengineV1Conversation" +}, +"response": { +"$ref": "GoogleCloudDiscoveryengineV1Conversation" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +} +} +}, +"operations": { +"methods": { +"get": { +"description": "Gets the latest state of a long-running operation. Clients can use this method to poll the operation result at intervals as recommended by the API service.", +"flatPath": "v1/projects/{projectsId}/locations/{locationsId}/collections/{collectionsId}/engines/{enginesId}/operations/{operationsId}", +"httpMethod": "GET", +"id": "discoveryengine.projects.locations.collections.engines.operations.get", +"parameterOrder": [ +"name" +], +"parameters": { +"name": { +"description": "The name of the operation resource.", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/collections/[^/]+/engines/[^/]+/operations/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1/{+name}", +"response": { +"$ref": "GoogleLongrunningOperation" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +}, +"list": { +"description": "Lists operations that match the specified filter in the request. If the server doesn't support this method, it returns `UNIMPLEMENTED`.", +"flatPath": "v1/projects/{projectsId}/locations/{locationsId}/collections/{collectionsId}/engines/{enginesId}/operations", +"httpMethod": "GET", +"id": "discoveryengine.projects.locations.collections.engines.operations.list", +"parameterOrder": [ +"name" ], "parameters": { "filter": { @@ -3165,7 +3524,7 @@ ], "parameters": { "filter": { -"description": "Filter by solution type . For example: filter = 'solution_type:SOLUTION_TYPE_SEARCH'", +"description": "Filter by solution type . For example: `filter = 'solution_type:SOLUTION_TYPE_SEARCH'`", "location": "query", "type": "string" }, @@ -3552,6 +3911,168 @@ } } }, +"controls": { +"methods": { +"create": { +"description": "Creates a Control. By default 1000 controls are allowed for a data store. A request can be submitted to adjust this limit. If the Control to create already exists, an ALREADY_EXISTS error is returned.", +"flatPath": "v1/projects/{projectsId}/locations/{locationsId}/dataStores/{dataStoresId}/controls", +"httpMethod": "POST", +"id": "discoveryengine.projects.locations.dataStores.controls.create", +"parameterOrder": [ +"parent" +], +"parameters": { +"controlId": { +"description": "Required. 
The ID to use for the Control, which will become the final component of the Control's resource name. This value must be within 1-63 characters. Valid characters are /a-z-_/.", +"location": "query", +"type": "string" +}, +"parent": { +"description": "Required. Full resource name of parent data store. Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}`", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/dataStores/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1/{+parent}/controls", +"request": { +"$ref": "GoogleCloudDiscoveryengineV1Control" +}, +"response": { +"$ref": "GoogleCloudDiscoveryengineV1Control" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +}, +"delete": { +"description": "Deletes a Control. If the Control to delete does not exist, a NOT_FOUND error is returned.", +"flatPath": "v1/projects/{projectsId}/locations/{locationsId}/dataStores/{dataStoresId}/controls/{controlsId}", +"httpMethod": "DELETE", +"id": "discoveryengine.projects.locations.dataStores.controls.delete", +"parameterOrder": [ +"name" +], +"parameters": { +"name": { +"description": "Required. The resource name of the Control to delete. Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}/controls/{control_id}`", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/dataStores/[^/]+/controls/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1/{+name}", +"response": { +"$ref": "GoogleProtobufEmpty" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +}, +"get": { +"description": "Gets a Control.", +"flatPath": "v1/projects/{projectsId}/locations/{locationsId}/dataStores/{dataStoresId}/controls/{controlsId}", +"httpMethod": "GET", +"id": "discoveryengine.projects.locations.dataStores.controls.get", +"parameterOrder": [ +"name" +], +"parameters": { +"name": { +"description": "Required. The resource name of the Control to get. Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}/controls/{control_id}`", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/dataStores/[^/]+/controls/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1/{+name}", +"response": { +"$ref": "GoogleCloudDiscoveryengineV1Control" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +}, +"list": { +"description": "Lists all Controls by their parent DataStore.", +"flatPath": "v1/projects/{projectsId}/locations/{locationsId}/dataStores/{dataStoresId}/controls", +"httpMethod": "GET", +"id": "discoveryengine.projects.locations.dataStores.controls.list", +"parameterOrder": [ +"parent" +], +"parameters": { +"filter": { +"description": "Optional. A filter to apply on the list results. Supported features: * List all the products under the parent branch if filter is unset. Currently this field is unsupported.", +"location": "query", +"type": "string" +}, +"pageSize": { +"description": "Optional. Maximum number of results to return. If unspecified, defaults to 50. Max allowed value is 1000.", +"format": "int32", +"location": "query", +"type": "integer" +}, +"pageToken": { +"description": "Optional. A page token, received from a previous `ListControls` call. Provide this to retrieve the subsequent page.", +"location": "query", +"type": "string" +}, +"parent": { +"description": "Required. The data store resource name. 
Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}`", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/dataStores/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1/{+parent}/controls", +"response": { +"$ref": "GoogleCloudDiscoveryengineV1ListControlsResponse" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +}, +"patch": { +"description": "Updates a Control. Control action type cannot be changed. If the Control to update does not exist, a NOT_FOUND error is returned.", +"flatPath": "v1/projects/{projectsId}/locations/{locationsId}/dataStores/{dataStoresId}/controls/{controlsId}", +"httpMethod": "PATCH", +"id": "discoveryengine.projects.locations.dataStores.controls.patch", +"parameterOrder": [ +"name" +], +"parameters": { +"name": { +"description": "Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/dataStores/[^/]+/controls/[^/]+$", +"required": true, +"type": "string" +}, +"updateMask": { +"description": "Optional. Indicates which fields in the provided Control to update. The following are NOT supported: * Control.name * Control.solution_type If not set or empty, all supported fields are updated.", +"format": "google-fieldmask", +"location": "query", +"type": "string" +} +}, +"path": "v1/{+name}", +"request": { +"$ref": "GoogleCloudDiscoveryengineV1Control" +}, +"response": { +"$ref": "GoogleCloudDiscoveryengineV1Control" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +} +} +}, "conversations": { "methods": { "converse": { @@ -4733,6 +5254,11 @@ "pattern": "^projects/[^/]+/locations/[^/]+/dataStores/[^/]+$", "required": true, "type": "string" +}, +"writeAsync": { +"description": "If set to true, the user event is written asynchronously after validation, and the API responds without waiting for the write.", +"location": "query", +"type": "boolean" } }, "path": "v1/{+parent}/userEvents:write", @@ -4901,6 +5427,11 @@ "pattern": "^projects/[^/]+/locations/[^/]+$", "required": true, "type": "string" +}, +"writeAsync": { +"description": "If set to true, the user event is written asynchronously after validation, and the API responds without waiting for the write.", +"location": "query", +"type": "boolean" } }, "path": "v1/{+parent}/userEvents:write", @@ -5019,7 +5550,7 @@ } } }, -"revision": "20240517", +"revision": "20240526", "rootUrl": "https://discoveryengine.googleapis.com/", "schemas": { "GoogleApiHttpBody": { @@ -5354,6 +5885,10 @@ "description": "Specifies whether to filter out adversarial queries. The default value is `false`. Google employs search-query classification to detect adversarial queries. No answer is returned if the search query is classified as an adversarial query. For example, a user might ask a question regarding negative comments about the company or submit a query designed to generate unsafe, policy-violating output. If this field is set to `true`, we skip generating answers for adversarial queries and return fallback messages instead.", "type": "boolean" }, +"ignoreLowRelevantContent": { +"description": "Specifies whether to filter out queries that have low relevance. If this field is set to `false`, all search results are used regardless of relevance to generate answers. 
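The new `writeAsync` query parameter added above lets callers fire-and-forget user events: the API validates the event and returns without waiting for the write. A sketch of using it from the Python client; the event body is illustrative, so consult the UserEvent schema for the authoritative field list:

```python
from googleapiclient.discovery import build

discoveryengine = build("discoveryengine", "v1")
parent = "projects/PROJECT_ID/locations/global/dataStores/DATA_STORE_ID"  # placeholder

discoveryengine.projects().locations().dataStores().userEvents().write(
    parent=parent,
    writeAsync=True,  # return after validation, without waiting for the write
    body={
        "eventType": "view-item",       # illustrative event type
        "userPseudoId": "visitor-123",  # pseudonymous visitor id
    },
).execute()
```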
If set to `true` or unset, the behavior will be determined automatically by the service.", +"type": "boolean" +}, "ignoreNonAnswerSeekingQuery": { "description": "Specifies whether to filter out queries that are not answer-seeking. The default value is `false`. Google employs search-query classification to detect answer-seeking queries. No answer is returned if the search query is classified as a non-answer seeking query. If this field is set to `true`, we skip generating answers for non-answer seeking queries and return fallback messages instead.", "type": "boolean" @@ -5490,6 +6025,13 @@ "$ref": "GoogleCloudDiscoveryengineV1SearchRequestBoostSpec", "description": "Boost specification to boost certain documents in search results which may affect the answer query response. For more information on boosting, see [Boosting](https://cloud.google.com/retail/docs/boosting#boost)" }, +"dataStoreSpecs": { +"description": "Specs defining dataStores to filter on in a search call and configurations for those dataStores. This is only considered for engines with the multiple-dataStores use case. For a single dataStore within an engine, use the specs at the top level.", +"items": { +"$ref": "GoogleCloudDiscoveryengineV1SearchRequestDataStoreSpec" +}, +"type": "array" +}, "filter": { "description": "The filter syntax consists of an expression language for constructing a predicate from one or more fields of the documents being filtered. Filter expression is case-sensitive. This will be used to filter search results which may affect the Answer response. If this field is unrecognizable, an `INVALID_ARGUMENT` is returned. Filtering in Vertex AI Search is done by mapping the LHS filter key to a key property defined in the Vertex AI Search backend -- this mapping is defined by the customer in their schema. For example, a media customer might have a field 'name' in their schema. In this case the filter would look like this: filter --> name:'ANY(\"king kong\")' For more information about filtering including syntax and filter operators, see [Filter](https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata)", "type": "string" @@ -5744,6 +6286,14 @@ "description": "Page identifier.", "type": "string" }, +"structData": { +"additionalProperties": { +"description": "Properties of the object.", +"type": "any" +}, +"description": "The structured JSON metadata for the document. It is populated from the struct data from the Chunk in the search result.", +"type": "object" +}, "title": { "description": "Title.", "type": "string" @@ -5770,6 +6320,14 @@ "description": "Document resource name.", "type": "string" }, +"structData": { +"additionalProperties": { +"description": "Properties of the object.", +"type": "any" +}, +"description": "The structured JSON metadata for the document. It is populated from the struct data from the Chunk in the search result.", +"type": "object" +}, "title": { "description": "Title.", "type": "string" @@ -6357,6 +6915,200 @@ }, "type": "object" }, +"GoogleCloudDiscoveryengineV1Condition": { +"description": "Defines circumstances to be checked before allowing a behavior.", +"id": "GoogleCloudDiscoveryengineV1Condition", +"properties": { +"activeTimeRange": { +"description": "Range of time(s) specifying when condition is active. Maximum of 10 time ranges.", +"items": { +"$ref": "GoogleCloudDiscoveryengineV1ConditionTimeRange" +}, +"type": "array" +}, +"queryTerms": { +"description": "Search only. A list of terms to match the query on. 
Maximum of 10 query terms.",
+"items": {
+"$ref": "GoogleCloudDiscoveryengineV1ConditionQueryTerm"
+},
+"type": "array"
+}
+},
+"type": "object"
+},
+"GoogleCloudDiscoveryengineV1ConditionQueryTerm": {
+"description": "Matcher for search request query.",
+"id": "GoogleCloudDiscoveryengineV1ConditionQueryTerm",
+"properties": {
+"fullMatch": {
+"description": "Whether the search query needs to exactly match the query term.",
+"type": "boolean"
+},
+"value": {
+"description": "The specific query value to match against. Must be lowercase, must be UTF-8. Can have at most 3 space-separated terms if full_match is true. Cannot be an empty string. Maximum length of 5000 characters.",
+"type": "string"
+}
+},
+"type": "object"
+},
+"GoogleCloudDiscoveryengineV1ConditionTimeRange": {
+"description": "Used for time-dependent conditions.",
+"id": "GoogleCloudDiscoveryengineV1ConditionTimeRange",
+"properties": {
+"endTime": {
+"description": "End of time range. Range is inclusive. Must be in the future.",
+"format": "google-datetime",
+"type": "string"
+},
+"startTime": {
+"description": "Start of time range. Range is inclusive.",
+"format": "google-datetime",
+"type": "string"
+}
+},
+"type": "object"
+},
+"GoogleCloudDiscoveryengineV1Control": {
+"description": "Defines a conditioned behavior to employ during serving. Must be attached to a ServingConfig to be considered at serving time. Permitted actions are dependent on `SolutionType`.",
+"id": "GoogleCloudDiscoveryengineV1Control",
+"properties": {
+"associatedServingConfigIds": {
+"description": "Output only. List of all ServingConfig ids this control is attached to. May take up to 10 minutes to update after changes.",
+"items": {
+"type": "string"
+},
+"readOnly": true,
+"type": "array"
+},
+"boostAction": {
+"$ref": "GoogleCloudDiscoveryengineV1ControlBoostAction",
+"description": "Defines a boost-type control."
+},
+"conditions": {
+"description": "Determines when the associated action will trigger. Omit to always apply the action. Currently only a single condition may be specified. Otherwise an INVALID ARGUMENT error is thrown.",
+"items": {
+"$ref": "GoogleCloudDiscoveryengineV1Condition"
+},
+"type": "array"
+},
+"displayName": {
+"description": "Required. Human readable name. The identifier used in UI views. Must be a UTF-8 encoded string. Length limit is 128 characters. Otherwise an INVALID ARGUMENT error is thrown.",
+"type": "string"
+},
+"filterAction": {
+"$ref": "GoogleCloudDiscoveryengineV1ControlFilterAction",
+"description": "Defines a filter-type control. Currently not supported by Recommendation."
+},
+"name": {
+"description": "Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`",
+"type": "string"
+},
+"redirectAction": {
+"$ref": "GoogleCloudDiscoveryengineV1ControlRedirectAction",
+"description": "Defines a redirect-type control."
+},
+"solutionType": {
+"description": "Required. Immutable. What solution the control belongs to. Must be compatible with vertical of resource. Otherwise an INVALID ARGUMENT error is thrown.",
+"enum": [
+"SOLUTION_TYPE_UNSPECIFIED",
+"SOLUTION_TYPE_RECOMMENDATION",
+"SOLUTION_TYPE_SEARCH",
+"SOLUTION_TYPE_CHAT",
+"SOLUTION_TYPE_GENERATIVE_CHAT"
+],
+"enumDescriptions": [
+"Default value.",
+"Used for Recommendations AI.",
+"Used for Discovery Search.",
+"Used for use cases related to the Generative AI agent.",
+"Used for use cases related to the Generative Chat agent. It's used for Generative chat engine only; the associated data stores must be enrolled with the `SOLUTION_TYPE_CHAT` solution."
+],
+"type": "string"
+},
+"synonymsAction": {
+"$ref": "GoogleCloudDiscoveryengineV1ControlSynonymsAction",
+"description": "Treats a group of terms as synonyms of one another."
+},
+"useCases": {
+"description": "Specifies the use case for the control. Affects what condition fields can be set. Only applies to SOLUTION_TYPE_SEARCH. Currently only one use case per control is allowed. Must be set when solution_type is SolutionType.SOLUTION_TYPE_SEARCH.",
+"items": {
+"enum": [
+"SEARCH_USE_CASE_UNSPECIFIED",
+"SEARCH_USE_CASE_SEARCH",
+"SEARCH_USE_CASE_BROWSE"
+],
+"enumDescriptions": [
+"Value used when unset. Will not occur in CSS.",
+"Search use case. Expects the traffic has a non-empty query.",
+"Browse use case. Expects the traffic has an empty query."
+],
+"type": "string"
+},
+"type": "array"
+}
+},
+"type": "object"
+},
+"GoogleCloudDiscoveryengineV1ControlBoostAction": {
+"description": "Adjusts order of products in returned list.",
+"id": "GoogleCloudDiscoveryengineV1ControlBoostAction",
+"properties": {
+"boost": {
+"description": "Required. Strength of the boost, which should be in [-1, 1]. Negative boost means demotion. Default is 0.0 (No-op).",
+"format": "float",
+"type": "number"
+},
+"dataStore": {
+"description": "Required. Specifies which data store's documents can be boosted by this control. Full data store name, e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store",
+"type": "string"
+},
+"filter": {
+"description": "Required. Specifies which products to apply the boost to. If no filter is provided, all products will be boosted (No-op). Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.",
+"type": "string"
+}
+},
+"type": "object"
+},
+"GoogleCloudDiscoveryengineV1ControlFilterAction": {
+"description": "Specifies which products may be included in results. Uses the same filter as boost.",
+"id": "GoogleCloudDiscoveryengineV1ControlFilterAction",
+"properties": {
+"dataStore": {
+"description": "Required. Specifies which data store's documents can be filtered by this control. Full data store name, e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store",
+"type": "string"
+},
+"filter": {
+"description": "Required. A filter to apply on the matching condition results. Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.",
+"type": "string"
+}
+},
+"type": "object"
+},
+"GoogleCloudDiscoveryengineV1ControlRedirectAction": {
+"description": "Redirects a shopper to the provided URI.",
+"id": "GoogleCloudDiscoveryengineV1ControlRedirectAction",
+"properties": {
+"redirectUri": {
+"description": "Required. The URI to which the shopper will be redirected. The URI must have a length equal to or less than 2000 characters. Otherwise an INVALID ARGUMENT error is thrown.",
+"type": "string"
+}
+},
+"type": "object"
+},
+"GoogleCloudDiscoveryengineV1ControlSynonymsAction": {
+"description": "Creates a set of terms that will act as synonyms of one another.
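
To make the Control schema above concrete, a hypothetical v1 Control that boosts documents only while a promotional query is active could be written as the following request body; every name and value here is illustrative, taken from the field definitions above.

    promo_control = {
        "displayName": "boost-promo-docs",
        "solutionType": "SOLUTION_TYPE_SEARCH",
        "useCases": ["SEARCH_USE_CASE_SEARCH"],
        "conditions": [
            {
                # Trigger only for this exact query (ConditionQueryTerm:
                # lowercase, at most 3 space-separated terms with fullMatch).
                "queryTerms": [{"value": "summer sale", "fullMatch": True}],
                # And only inside this window (ConditionTimeRange, inclusive).
                "activeTimeRange": [
                    {
                        "startTime": "2024-06-01T00:00:00Z",
                        "endTime": "2024-09-01T00:00:00Z",
                    }
                ],
            }
        ],
        "boostAction": {
            "boost": 0.8,  # in [-1, 1]; negative values demote
            "dataStore": "projects/123/locations/global/collections/default_collection/dataStores/my-store",
            "filter": 'category: ANY("promo")',
        },
    }
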
Example: \"happy\" will also be considered as \"glad\", \"glad\" will also be considered as \"happy\".", +"id": "GoogleCloudDiscoveryengineV1ControlSynonymsAction", +"properties": { +"synonyms": { +"description": "Defines a set of synonyms. Can specify up to 100 synonyms. Must specify at least 2 synonyms. Otherwise an INVALID ARGUMENT error is thrown.", +"items": { +"type": "string" +}, +"type": "array" +} +}, +"type": "object" +}, "GoogleCloudDiscoveryengineV1Conversation": { "description": "External conversation proto definition.", "id": "GoogleCloudDiscoveryengineV1Conversation", @@ -7132,7 +7884,7 @@ "id": "GoogleCloudDiscoveryengineV1EngineCommonConfig", "properties": { "companyName": { -"description": "Immutable. The name of the company, business or entity that is associated with the engine. Setting this may help improve LLM related features.", +"description": "The name of the company, business or entity that is associated with the engine. Setting this may help improve LLM related features.", "type": "string" } }, @@ -7624,10 +8376,28 @@ "format": "double", "type": "number" }, -"minimum": { -"description": "Inclusive lower bound.", -"format": "double", -"type": "number" +"minimum": { +"description": "Inclusive lower bound.", +"format": "double", +"type": "number" +} +}, +"type": "object" +}, +"GoogleCloudDiscoveryengineV1ListControlsResponse": { +"description": "Response for ListControls method.", +"id": "GoogleCloudDiscoveryengineV1ListControlsResponse", +"properties": { +"controls": { +"description": "All the Controls for a given data store.", +"items": { +"$ref": "GoogleCloudDiscoveryengineV1Control" +}, +"type": "array" +}, +"nextPageToken": { +"description": "Pagination token, if not returned indicates the last page.", +"type": "string" } }, "type": "object" @@ -7828,6 +8598,100 @@ }, "type": "object" }, +"GoogleCloudDiscoveryengineV1Project": { +"description": "Metadata and configurations for a Google Cloud project in the service.", +"id": "GoogleCloudDiscoveryengineV1Project", +"properties": { +"createTime": { +"description": "Output only. The timestamp when this project is created.", +"format": "google-datetime", +"readOnly": true, +"type": "string" +}, +"name": { +"description": "Output only. Full resource name of the project, for example `projects/{project_number}`. Note that when making requests, project number and project id are both acceptable, but the server will always respond in project number.", +"readOnly": true, +"type": "string" +}, +"provisionCompletionTime": { +"description": "Output only. The timestamp when this project is successfully provisioned. Empty value means this project is still provisioning and is not ready for use.", +"format": "google-datetime", +"readOnly": true, +"type": "string" +}, +"serviceTermsMap": { +"additionalProperties": { +"$ref": "GoogleCloudDiscoveryengineV1ProjectServiceTerms" +}, +"description": "Output only. A map of terms of services. 
The key is the `id` of ServiceTerms.", +"readOnly": true, +"type": "object" +} +}, +"type": "object" +}, +"GoogleCloudDiscoveryengineV1ProjectServiceTerms": { +"description": "Metadata about the terms of service.", +"id": "GoogleCloudDiscoveryengineV1ProjectServiceTerms", +"properties": { +"acceptTime": { +"description": "The last time when the project agreed to the terms of service.", +"format": "google-datetime", +"type": "string" +}, +"declineTime": { +"description": "The last time when the project declined or revoked the agreement to terms of service.", +"format": "google-datetime", +"type": "string" +}, +"id": { +"description": "The unique identifier of this terms of service. Available terms: * `GA_DATA_USE_TERMS`: [Terms for data use](https://cloud.google.com/retail/data-use-terms). When using this as `id`, the acceptable version to provide is `2022-11-23`.", +"type": "string" +}, +"state": { +"description": "Whether the project has accepted/rejected the service terms or it is still pending.", +"enum": [ +"STATE_UNSPECIFIED", +"TERMS_ACCEPTED", +"TERMS_PENDING", +"TERMS_DECLINED" +], +"enumDescriptions": [ +"The default value of the enum. This value is not actually used.", +"The project has given consent to the terms of service.", +"The project is pending to review and accept the terms of service.", +"The project has declined or revoked the agreement to terms of service." +], +"type": "string" +}, +"version": { +"description": "The version string of the terms of service. For acceptable values, see the comments for id above.", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDiscoveryengineV1ProvisionProjectMetadata": { +"description": "Metadata associated with a project provision operation.", +"id": "GoogleCloudDiscoveryengineV1ProvisionProjectMetadata", +"properties": {}, +"type": "object" +}, +"GoogleCloudDiscoveryengineV1ProvisionProjectRequest": { +"description": "Request for ProjectService.ProvisionProject method.", +"id": "GoogleCloudDiscoveryengineV1ProvisionProjectRequest", +"properties": { +"acceptDataUseTerms": { +"description": "Required. Set to `true` to specify that caller has read and would like to give consent to the [Terms for data use](https://cloud.google.com/retail/data-use-terms).", +"type": "boolean" +}, +"dataUseTermsVersion": { +"description": "Required. The version of the [Terms for data use](https://cloud.google.com/retail/data-use-terms) that caller has read and would like to give consent to. Acceptable version is `2022-11-23`, and this may change over time.", +"type": "string" +} +}, +"type": "object" +}, "GoogleCloudDiscoveryengineV1PurgeDocumentsMetadata": { "description": "Metadata related to the progress of the PurgeDocuments operation. This will be returned by the google.longrunning.Operation.metadata field.", "id": "GoogleCloudDiscoveryengineV1PurgeDocumentsMetadata", @@ -7978,6 +8842,13 @@ "description": "The number of results to return. If this is unset or no bigger than zero, returns all results.", "format": "int32", "type": "integer" +}, +"userLabels": { +"additionalProperties": { +"type": "string" +}, +"description": "The user labels applied to a resource must meet the following requirements: * Each resource can have multiple labels, up to a maximum of 64. * Each label must be a key-value pair. * Keys have a minimum length of 1 character and a maximum length of 63 characters and cannot be empty. Values can be empty and have a maximum length of 63 characters. 
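
Given the ProvisionProjectRequest schema above, the request body reduces to two required fields, as sketched below. The method entry for ProjectService.ProvisionProject is outside this hunk, so how the generated client exposes it (for example as a `provision` call on the projects resource) is an assumption; only the body shape is taken from the schema.

    # Request body for ProjectService.ProvisionProject, per the schema above.
    provision_body = {
        # Consent to https://cloud.google.com/retail/data-use-terms is required.
        "acceptDataUseTerms": True,
        # "2022-11-23" is the acceptable version at this revision.
        "dataUseTermsVersion": "2022-11-23",
    }
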
* Keys and values can contain only lowercase letters, numeric characters, underscores, and dashes. All characters must use UTF-8 encoding, and international characters are allowed. * The key portion of a label must be unique. However, you can use the same key with multiple resources. * Keys must start with a lowercase letter or international character. See [Google Cloud Document](https://cloud.google.com/resource-manager/docs/creating-managing-labels#requirements) for more details.",
+"type": "object"
+}
}
},
"type": "object"
},
@@ -9241,6 +10112,10 @@
"$ref": "GoogleCloudDiscoveryengineV1CompletionInfo",
"description": "CompletionService.CompleteQuery details related to the event. This field should be set for `search` event when autocomplete function is enabled and the user clicks a suggestion for search."
},
+"dataStore": {
+"description": "The DataStore resource full name, of the form `projects/{project}/locations/{location}/collections/{collection_id}/dataStores/{data_store_id}`. Optional. Only required for user events whose data store can't be determined by UserEvent.engine or UserEvent.documents. If data store is set in the parent of write/import/collect user event requests, this field can be omitted.",
+"type": "string"
+},
"directUserRequest": {
"description": "Should set to true if the request is made directly from the end user, in which case the UserEvent.user_info.user_agent can be populated from the HTTP request. This flag should be set only if the API request is made directly from the end user such as a mobile app (and not if a gateway or a server is processing and pushing the user events). This should not be set when using the JavaScript tag in UserEventService.CollectUserEvent.",
"type": "boolean"
@@ -9252,6 +10127,10 @@
},
"type": "array"
},
+"engine": {
+"description": "The Engine resource name, in the form of `projects/{project}/locations/{location}/collections/{collection_id}/engines/{engine_id}`. Optional. Only required for Engine-produced user events. For example, user events from blended search.",
+"type": "string"
+},
"eventTime": {
"description": "Only required for UserEventService.ImportUserEvents method. Timestamp of when the user event happened.",
"format": "google-datetime",
@@ -9565,6 +10444,14 @@
"description": "Page identifier.",
"type": "string"
},
+"structData": {
+"additionalProperties": {
+"description": "Properties of the object.",
+"type": "any"
+},
+"description": "The structured JSON metadata for the document. It is populated from the struct data from the Chunk in search result.",
+"type": "object"
+},
"title": {
"description": "Title.",
"type": "string"
},
@@ -9591,6 +10478,14 @@
"description": "Document resource name.",
"type": "string"
},
+"structData": {
+"additionalProperties": {
+"description": "Properties of the object.",
+"type": "any"
+},
+"description": "The structured JSON metadata for the document.
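
The new `writeAsync` query parameter and the `dataStore`/`engine` UserEvent fields above combine roughly as sketched here. `eventType` and `userPseudoId` are standard UserEvent fields defined outside this hunk, the method name `write` is inferred from the `userEvents:write` path, and all resource names are placeholders.

    # service = googleapiclient.discovery.build("discoveryengine", "v1")
    event = {
        "eventType": "search",         # standard UserEvent field (not in this hunk)
        "userPseudoId": "visitor-42",  # standard UserEvent field (not in this hunk)
        # New in this revision: attribute the event to an Engine,
        # e.g. for events produced by blended search.
        "engine": "projects/123/locations/global/collections/default_collection/engines/my-engine",
    }

    # writeAsync=True makes the API respond after validation,
    # without waiting for the write to complete.
    service.projects().locations().userEvents().write(
        parent="projects/123/locations/global",
        writeAsync=True,
        body=event,
    ).execute()
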
It is populated from the struct data from the Chunk in search result.", +"type": "object" +}, "title": { "description": "Title.", "type": "string" @@ -9724,69 +10619,263 @@ "description": "Chunk resource name.", "type": "string" }, -"content": { -"description": "Chunk textual content.", +"content": { +"description": "Chunk textual content.", +"type": "string" +}, +"relevanceScore": { +"description": "Relevance score.", +"format": "float", +"type": "number" +} +}, +"type": "object" +}, +"GoogleCloudDiscoveryengineV1alphaAnswerStepActionObservationSearchResultSnippetInfo": { +"description": "Snippet information.", +"id": "GoogleCloudDiscoveryengineV1alphaAnswerStepActionObservationSearchResultSnippetInfo", +"properties": { +"snippet": { +"description": "Snippet content.", +"type": "string" +}, +"snippetStatus": { +"description": "Status of the snippet defined by the search team.", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDiscoveryengineV1alphaAnswerStepActionSearchAction": { +"description": "Search action.", +"id": "GoogleCloudDiscoveryengineV1alphaAnswerStepActionSearchAction", +"properties": { +"query": { +"description": "The query to search.", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDiscoveryengineV1alphaBatchCreateTargetSiteMetadata": { +"description": "Metadata related to the progress of the SiteSearchEngineService.BatchCreateTargetSites operation. This will be returned by the google.longrunning.Operation.metadata field.", +"id": "GoogleCloudDiscoveryengineV1alphaBatchCreateTargetSiteMetadata", +"properties": { +"createTime": { +"description": "Operation create time.", +"format": "google-datetime", +"type": "string" +}, +"updateTime": { +"description": "Operation last update time. If the operation is done, this is also the finish time.", +"format": "google-datetime", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDiscoveryengineV1alphaBatchCreateTargetSitesResponse": { +"description": "Response message for SiteSearchEngineService.BatchCreateTargetSites method.", +"id": "GoogleCloudDiscoveryengineV1alphaBatchCreateTargetSitesResponse", +"properties": { +"targetSites": { +"description": "TargetSites created.", +"items": { +"$ref": "GoogleCloudDiscoveryengineV1alphaTargetSite" +}, +"type": "array" +} +}, +"type": "object" +}, +"GoogleCloudDiscoveryengineV1alphaCondition": { +"description": "Defines circumstances to be checked before allowing a behavior", +"id": "GoogleCloudDiscoveryengineV1alphaCondition", +"properties": { +"activeTimeRange": { +"description": "Range of time(s) specifying when condition is active. Maximum of 10 time ranges.", +"items": { +"$ref": "GoogleCloudDiscoveryengineV1alphaConditionTimeRange" +}, +"type": "array" +}, +"queryTerms": { +"description": "Search only A list of terms to match the query on. Maximum of 10 query terms.", +"items": { +"$ref": "GoogleCloudDiscoveryengineV1alphaConditionQueryTerm" +}, +"type": "array" +} +}, +"type": "object" +}, +"GoogleCloudDiscoveryengineV1alphaConditionQueryTerm": { +"description": "Matcher for search request query", +"id": "GoogleCloudDiscoveryengineV1alphaConditionQueryTerm", +"properties": { +"fullMatch": { +"description": "Whether the search query needs to exactly match the query term.", +"type": "boolean" +}, +"value": { +"description": "The specific query value to match against Must be lowercase, must be UTF-8. Can have at most 3 space separated terms if full_match is true. Cannot be an empty string. 
Maximum length of 5000 characters.",
+"type": "string"
+}
+},
+"type": "object"
+},
+"GoogleCloudDiscoveryengineV1alphaConditionTimeRange": {
+"description": "Used for time-dependent conditions.",
+"id": "GoogleCloudDiscoveryengineV1alphaConditionTimeRange",
+"properties": {
+"endTime": {
+"description": "End of time range. Range is inclusive. Must be in the future.",
+"format": "google-datetime",
+"type": "string"
+},
+"startTime": {
+"description": "Start of time range. Range is inclusive.",
+"format": "google-datetime",
+"type": "string"
+}
+},
+"type": "object"
+},
+"GoogleCloudDiscoveryengineV1alphaControl": {
+"description": "Defines a conditioned behavior to employ during serving. Must be attached to a ServingConfig to be considered at serving time. Permitted actions are dependent on `SolutionType`.",
+"id": "GoogleCloudDiscoveryengineV1alphaControl",
+"properties": {
+"associatedServingConfigIds": {
+"description": "Output only. List of all ServingConfig ids this control is attached to. May take up to 10 minutes to update after changes.",
+"items": {
+"type": "string"
+},
+"readOnly": true,
+"type": "array"
+},
+"boostAction": {
+"$ref": "GoogleCloudDiscoveryengineV1alphaControlBoostAction",
+"description": "Defines a boost-type control."
+},
+"conditions": {
+"description": "Determines when the associated action will trigger. Omit to always apply the action. Currently only a single condition may be specified. Otherwise an INVALID ARGUMENT error is thrown.",
+"items": {
+"$ref": "GoogleCloudDiscoveryengineV1alphaCondition"
+},
+"type": "array"
+},
+"displayName": {
+"description": "Required. Human readable name. The identifier used in UI views. Must be a UTF-8 encoded string. Length limit is 128 characters. Otherwise an INVALID ARGUMENT error is thrown.",
+"type": "string"
+},
+"filterAction": {
+"$ref": "GoogleCloudDiscoveryengineV1alphaControlFilterAction",
+"description": "Defines a filter-type control. Currently not supported by Recommendation."
+},
+"name": {
+"description": "Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`",
+"type": "string"
+},
+"redirectAction": {
+"$ref": "GoogleCloudDiscoveryengineV1alphaControlRedirectAction",
+"description": "Defines a redirect-type control."
+},
+"solutionType": {
+"description": "Required. Immutable. What solution the control belongs to. Must be compatible with vertical of resource. Otherwise an INVALID ARGUMENT error is thrown.",
+"enum": [
+"SOLUTION_TYPE_UNSPECIFIED",
+"SOLUTION_TYPE_RECOMMENDATION",
+"SOLUTION_TYPE_SEARCH",
+"SOLUTION_TYPE_CHAT",
+"SOLUTION_TYPE_GENERATIVE_CHAT"
+],
+"enumDescriptions": [
+"Default value.",
+"Used for Recommendations AI.",
+"Used for Discovery Search.",
+"Used for use cases related to the Generative AI agent.",
+"Used for use cases related to the Generative Chat agent. It's used for Generative chat engine only; the associated data stores must be enrolled with the `SOLUTION_TYPE_CHAT` solution."
+],
+"type": "string"
+},
+"synonymsAction": {
+"$ref": "GoogleCloudDiscoveryengineV1alphaControlSynonymsAction",
+"description": "Treats a group of terms as synonyms of one another."
+},
+"useCases": {
+"description": "Specifies the use case for the control. Affects what condition fields can be set. Only applies to SOLUTION_TYPE_SEARCH. Currently only one use case per control is allowed.
Must be set when solution_type is SolutionType.SOLUTION_TYPE_SEARCH.", +"items": { +"enum": [ +"SEARCH_USE_CASE_UNSPECIFIED", +"SEARCH_USE_CASE_SEARCH", +"SEARCH_USE_CASE_BROWSE" +], +"enumDescriptions": [ +"Value used when unset. Will not occur in CSS.", +"Search use case. Expects the traffic has a non-empty query.", +"Browse use case. Expects the traffic has an empty query." +], "type": "string" }, -"relevanceScore": { -"description": "Relevance score.", -"format": "float", -"type": "number" +"type": "array" } }, "type": "object" }, -"GoogleCloudDiscoveryengineV1alphaAnswerStepActionObservationSearchResultSnippetInfo": { -"description": "Snippet information.", -"id": "GoogleCloudDiscoveryengineV1alphaAnswerStepActionObservationSearchResultSnippetInfo", +"GoogleCloudDiscoveryengineV1alphaControlBoostAction": { +"description": "Adjusts order of products in returned list.", +"id": "GoogleCloudDiscoveryengineV1alphaControlBoostAction", "properties": { -"snippet": { -"description": "Snippet content.", +"boost": { +"description": "Required. Strength of the boost, which should be in [-1, 1]. Negative boost means demotion. Default is 0.0 (No-op).", +"format": "float", +"type": "number" +}, +"dataStore": { +"description": "Required. Specifies which data store's documents can be boosted by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store", "type": "string" }, -"snippetStatus": { -"description": "Status of the snippet defined by the search team.", +"filter": { +"description": "Required. Specifies which products to apply the boost to. If no filter is provided all products will be boosted (No-op). Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.", "type": "string" } }, "type": "object" }, -"GoogleCloudDiscoveryengineV1alphaAnswerStepActionSearchAction": { -"description": "Search action.", -"id": "GoogleCloudDiscoveryengineV1alphaAnswerStepActionSearchAction", +"GoogleCloudDiscoveryengineV1alphaControlFilterAction": { +"description": "Specified which products may be included in results. Uses same filter as boost.", +"id": "GoogleCloudDiscoveryengineV1alphaControlFilterAction", "properties": { -"query": { -"description": "The query to search.", +"dataStore": { +"description": "Required. Specifies which data store's documents can be filtered by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store", +"type": "string" +}, +"filter": { +"description": "Required. A filter to apply on the matching condition results. Required Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.", "type": "string" } }, "type": "object" }, -"GoogleCloudDiscoveryengineV1alphaBatchCreateTargetSiteMetadata": { -"description": "Metadata related to the progress of the SiteSearchEngineService.BatchCreateTargetSites operation. 
This will be returned by the google.longrunning.Operation.metadata field.", -"id": "GoogleCloudDiscoveryengineV1alphaBatchCreateTargetSiteMetadata", +"GoogleCloudDiscoveryengineV1alphaControlRedirectAction": { +"description": "Redirects a shopper to the provided URI.", +"id": "GoogleCloudDiscoveryengineV1alphaControlRedirectAction", "properties": { -"createTime": { -"description": "Operation create time.", -"format": "google-datetime", -"type": "string" -}, -"updateTime": { -"description": "Operation last update time. If the operation is done, this is also the finish time.", -"format": "google-datetime", +"redirectUri": { +"description": "Required. The URI to which the shopper will be redirected. Required. URI must have length equal or less than 2000 characters. Otherwise an INVALID ARGUMENT error is thrown.", "type": "string" } }, "type": "object" }, -"GoogleCloudDiscoveryengineV1alphaBatchCreateTargetSitesResponse": { -"description": "Response message for SiteSearchEngineService.BatchCreateTargetSites method.", -"id": "GoogleCloudDiscoveryengineV1alphaBatchCreateTargetSitesResponse", +"GoogleCloudDiscoveryengineV1alphaControlSynonymsAction": { +"description": "Creates a set of terms that will act as synonyms of one another. Example: \"happy\" will also be considered as \"glad\", \"glad\" will also be considered as \"happy\".", +"id": "GoogleCloudDiscoveryengineV1alphaControlSynonymsAction", "properties": { -"targetSites": { -"description": "TargetSites created.", +"synonyms": { +"description": "Defines a set of synonyms. Can specify up to 100 synonyms. Must specify at least 2 synonyms. Otherwise an INVALID ARGUMENT error is thrown.", "items": { -"$ref": "GoogleCloudDiscoveryengineV1alphaTargetSite" +"type": "string" }, "type": "array" } @@ -9885,7 +10974,7 @@ "TRAINING_FAILED" ], "enumDescriptions": [ -"", +"Default value.", "The model is in a paused training state.", "The model is currently training.", "The model has successfully completed training.", @@ -9895,6 +10984,7 @@ "type": "string" }, "modelVersion": { +"description": "The version of the model.", "format": "int64", "type": "string" }, @@ -10375,7 +11465,7 @@ "id": "GoogleCloudDiscoveryengineV1alphaEngineCommonConfig", "properties": { "companyName": { -"description": "Immutable. The name of the company, business or entity that is associated with the engine. Setting this may help improve LLM related features.", +"description": "The name of the company, business or entity that is associated with the engine. Setting this may help improve LLM related features.", "type": "string" } }, @@ -10561,12 +11651,14 @@ "enum": [ "ADVANCED_SITE_SEARCH_DATA_SOURCE_UNSPECIFIED", "METATAGS", -"PAGEMAP" +"PAGEMAP", +"SCHEMA_ORG" ], "enumDescriptions": [ "Value used when unset.", "Retrieve value from meta tag.", -"Retrieve value from page map." +"Retrieve value from page map.", +"Retrieve value from schema.org data." ], "type": "string" }, @@ -10676,6 +11768,13 @@ ], "type": "string" }, +"schemaOrgPaths": { +"description": "Field paths for indexing custom attribute from schema.org data. More details of schema.org and its defined types can be found at [schema.org](https://schema.org). It is only used on advanced site search schema. Currently only support full path from root. The full path to a field is constructed by concatenating field names, starting from `_root`, with a period `.` as the delimiter. 
Examples: * Publish date of the root: _root.datePublished * Publish date of the reviews: _root.review.datePublished", +"items": { +"type": "string" +}, +"type": "array" +}, "searchableOption": { "description": "If searchable_option is SEARCHABLE_ENABLED, field values are searchable by text queries in SearchService.Search. If SEARCHABLE_ENABLED but field type is numerical, field values will not be searchable by text queries in SearchService.Search, as there are no text values associated to numerical fields. If searchable_option is unset, the server behavior defaults to SEARCHABLE_DISABLED for fields that support setting searchable options. Only `string` fields that have no key property mapping support setting searchable_option. For those fields that do not support setting searchable options, the server will skip searchable option setting, and setting searchable_option for those fields will throw `INVALID_ARGUMENT` error.", "enum": [ @@ -11583,6 +12682,200 @@ }, "type": "object" }, +"GoogleCloudDiscoveryengineV1betaCondition": { +"description": "Defines circumstances to be checked before allowing a behavior", +"id": "GoogleCloudDiscoveryengineV1betaCondition", +"properties": { +"activeTimeRange": { +"description": "Range of time(s) specifying when condition is active. Maximum of 10 time ranges.", +"items": { +"$ref": "GoogleCloudDiscoveryengineV1betaConditionTimeRange" +}, +"type": "array" +}, +"queryTerms": { +"description": "Search only A list of terms to match the query on. Maximum of 10 query terms.", +"items": { +"$ref": "GoogleCloudDiscoveryengineV1betaConditionQueryTerm" +}, +"type": "array" +} +}, +"type": "object" +}, +"GoogleCloudDiscoveryengineV1betaConditionQueryTerm": { +"description": "Matcher for search request query", +"id": "GoogleCloudDiscoveryengineV1betaConditionQueryTerm", +"properties": { +"fullMatch": { +"description": "Whether the search query needs to exactly match the query term.", +"type": "boolean" +}, +"value": { +"description": "The specific query value to match against Must be lowercase, must be UTF-8. Can have at most 3 space separated terms if full_match is true. Cannot be an empty string. Maximum length of 5000 characters.", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDiscoveryengineV1betaConditionTimeRange": { +"description": "Used for time-dependent conditions.", +"id": "GoogleCloudDiscoveryengineV1betaConditionTimeRange", +"properties": { +"endTime": { +"description": "End of time range. Range is inclusive. Must be in the future.", +"format": "google-datetime", +"type": "string" +}, +"startTime": { +"description": "Start of time range. Range is inclusive.", +"format": "google-datetime", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDiscoveryengineV1betaControl": { +"description": "Defines a conditioned behavior to employ during serving. Must be attached to a ServingConfig to be considered at serving time. Permitted actions dependent on `SolutionType`.", +"id": "GoogleCloudDiscoveryengineV1betaControl", +"properties": { +"associatedServingConfigIds": { +"description": "Output only. List of all ServingConfig ids this control is attached to. May take up to 10 minutes to update after changes.", +"items": { +"type": "string" +}, +"readOnly": true, +"type": "array" +}, +"boostAction": { +"$ref": "GoogleCloudDiscoveryengineV1betaControlBoostAction", +"description": "Defines a boost-type control" +}, +"conditions": { +"description": "Determines when the associated action will trigger. Omit to always apply the action. 
Currently only a single condition may be specified. Otherwise an INVALID ARGUMENT error is thrown.",
+"items": {
+"$ref": "GoogleCloudDiscoveryengineV1betaCondition"
+},
+"type": "array"
+},
+"displayName": {
+"description": "Required. Human readable name. The identifier used in UI views. Must be a UTF-8 encoded string. Length limit is 128 characters. Otherwise an INVALID ARGUMENT error is thrown.",
+"type": "string"
+},
+"filterAction": {
+"$ref": "GoogleCloudDiscoveryengineV1betaControlFilterAction",
+"description": "Defines a filter-type control. Currently not supported by Recommendation."
+},
+"name": {
+"description": "Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`",
+"type": "string"
+},
+"redirectAction": {
+"$ref": "GoogleCloudDiscoveryengineV1betaControlRedirectAction",
+"description": "Defines a redirect-type control."
+},
+"solutionType": {
+"description": "Required. Immutable. What solution the control belongs to. Must be compatible with vertical of resource. Otherwise an INVALID ARGUMENT error is thrown.",
+"enum": [
+"SOLUTION_TYPE_UNSPECIFIED",
+"SOLUTION_TYPE_RECOMMENDATION",
+"SOLUTION_TYPE_SEARCH",
+"SOLUTION_TYPE_CHAT",
+"SOLUTION_TYPE_GENERATIVE_CHAT"
+],
+"enumDescriptions": [
+"Default value.",
+"Used for Recommendations AI.",
+"Used for Discovery Search.",
+"Used for use cases related to the Generative AI agent.",
+"Used for use cases related to the Generative Chat agent. It's used for Generative chat engine only; the associated data stores must be enrolled with the `SOLUTION_TYPE_CHAT` solution."
+],
+"type": "string"
+},
+"synonymsAction": {
+"$ref": "GoogleCloudDiscoveryengineV1betaControlSynonymsAction",
+"description": "Treats a group of terms as synonyms of one another."
+},
+"useCases": {
+"description": "Specifies the use case for the control. Affects what condition fields can be set. Only applies to SOLUTION_TYPE_SEARCH. Currently only one use case per control is allowed. Must be set when solution_type is SolutionType.SOLUTION_TYPE_SEARCH.",
+"items": {
+"enum": [
+"SEARCH_USE_CASE_UNSPECIFIED",
+"SEARCH_USE_CASE_SEARCH",
+"SEARCH_USE_CASE_BROWSE"
+],
+"enumDescriptions": [
+"Value used when unset. Will not occur in CSS.",
+"Search use case. Expects the traffic has a non-empty query.",
+"Browse use case. Expects the traffic has an empty query."
+],
+"type": "string"
+},
+"type": "array"
+}
+},
+"type": "object"
+},
+"GoogleCloudDiscoveryengineV1betaControlBoostAction": {
+"description": "Adjusts order of products in returned list.",
+"id": "GoogleCloudDiscoveryengineV1betaControlBoostAction",
+"properties": {
+"boost": {
+"description": "Required. Strength of the boost, which should be in [-1, 1]. Negative boost means demotion. Default is 0.0 (No-op).",
+"format": "float",
+"type": "number"
+},
+"dataStore": {
+"description": "Required. Specifies which data store's documents can be boosted by this control. Full data store name, e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store",
+"type": "string"
+},
+"filter": {
+"description": "Required. Specifies which products to apply the boost to. If no filter is provided, all products will be boosted (No-op). Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.",
+"type": "string"
+}
+},
+"type": "object"
+},
+"GoogleCloudDiscoveryengineV1betaControlFilterAction": {
+"description": "Specifies which products may be included in results.
Uses same filter as boost.", +"id": "GoogleCloudDiscoveryengineV1betaControlFilterAction", +"properties": { +"dataStore": { +"description": "Required. Specifies which data store's documents can be filtered by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store", +"type": "string" +}, +"filter": { +"description": "Required. A filter to apply on the matching condition results. Required Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDiscoveryengineV1betaControlRedirectAction": { +"description": "Redirects a shopper to the provided URI.", +"id": "GoogleCloudDiscoveryengineV1betaControlRedirectAction", +"properties": { +"redirectUri": { +"description": "Required. The URI to which the shopper will be redirected. Required. URI must have length equal or less than 2000 characters. Otherwise an INVALID ARGUMENT error is thrown.", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDiscoveryengineV1betaControlSynonymsAction": { +"description": "Creates a set of terms that will act as synonyms of one another. Example: \"happy\" will also be considered as \"glad\", \"glad\" will also be considered as \"happy\".", +"id": "GoogleCloudDiscoveryengineV1betaControlSynonymsAction", +"properties": { +"synonyms": { +"description": "Defines a set of synonyms. Can specify up to 100 synonyms. Must specify at least 2 synonyms. Otherwise an INVALID ARGUMENT error is thrown.", +"items": { +"type": "string" +}, +"type": "array" +} +}, +"type": "object" +}, "GoogleCloudDiscoveryengineV1betaCreateDataStoreMetadata": { "description": "Metadata related to the progress of the DataStoreService.CreateDataStore operation. This will be returned by the google.longrunning.Operation.metadata field.", "id": "GoogleCloudDiscoveryengineV1betaCreateDataStoreMetadata", @@ -11675,7 +12968,7 @@ "TRAINING_FAILED" ], "enumDescriptions": [ -"", +"Default value.", "The model is in a paused training state.", "The model is currently training.", "The model has successfully completed training.", @@ -11685,6 +12978,7 @@ "type": "string" }, "modelVersion": { +"description": "The version of the model.", "format": "int64", "type": "string" }, @@ -12102,7 +13396,7 @@ "id": "GoogleCloudDiscoveryengineV1betaEngineCommonConfig", "properties": { "companyName": { -"description": "Immutable. The name of the company, business or entity that is associated with the engine. Setting this may help improve LLM related features.", +"description": "The name of the company, business or entity that is associated with the engine. Setting this may help improve LLM related features.", "type": "string" } }, @@ -12315,6 +13609,85 @@ }, "type": "object" }, +"GoogleCloudDiscoveryengineV1betaProject": { +"description": "Metadata and configurations for a Google Cloud project in the service.", +"id": "GoogleCloudDiscoveryengineV1betaProject", +"properties": { +"createTime": { +"description": "Output only. The timestamp when this project is created.", +"format": "google-datetime", +"readOnly": true, +"type": "string" +}, +"name": { +"description": "Output only. Full resource name of the project, for example `projects/{project_number}`. 
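
Following the SynonymsAction example above ("happy" treated as "glad" and vice versa), a minimal synonyms Control body might look like this; the display name is illustrative, and the schema requires between 2 and 100 synonyms.

    synonyms_control = {
        "displayName": "happy-glad-synonyms",
        "solutionType": "SOLUTION_TYPE_SEARCH",
        # At least 2 and at most 100 terms, per the SynonymsAction schema.
        "synonymsAction": {"synonyms": ["happy", "glad"]},
    }
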
Note that when making requests, project number and project id are both acceptable, but the server will always respond in project number.", +"readOnly": true, +"type": "string" +}, +"provisionCompletionTime": { +"description": "Output only. The timestamp when this project is successfully provisioned. Empty value means this project is still provisioning and is not ready for use.", +"format": "google-datetime", +"readOnly": true, +"type": "string" +}, +"serviceTermsMap": { +"additionalProperties": { +"$ref": "GoogleCloudDiscoveryengineV1betaProjectServiceTerms" +}, +"description": "Output only. A map of terms of services. The key is the `id` of ServiceTerms.", +"readOnly": true, +"type": "object" +} +}, +"type": "object" +}, +"GoogleCloudDiscoveryengineV1betaProjectServiceTerms": { +"description": "Metadata about the terms of service.", +"id": "GoogleCloudDiscoveryengineV1betaProjectServiceTerms", +"properties": { +"acceptTime": { +"description": "The last time when the project agreed to the terms of service.", +"format": "google-datetime", +"type": "string" +}, +"declineTime": { +"description": "The last time when the project declined or revoked the agreement to terms of service.", +"format": "google-datetime", +"type": "string" +}, +"id": { +"description": "The unique identifier of this terms of service. Available terms: * `GA_DATA_USE_TERMS`: [Terms for data use](https://cloud.google.com/retail/data-use-terms). When using this as `id`, the acceptable version to provide is `2022-11-23`.", +"type": "string" +}, +"state": { +"description": "Whether the project has accepted/rejected the service terms or it is still pending.", +"enum": [ +"STATE_UNSPECIFIED", +"TERMS_ACCEPTED", +"TERMS_PENDING", +"TERMS_DECLINED" +], +"enumDescriptions": [ +"The default value of the enum. This value is not actually used.", +"The project has given consent to the terms of service.", +"The project is pending to review and accept the terms of service.", +"The project has declined or revoked the agreement to terms of service." +], +"type": "string" +}, +"version": { +"description": "The version string of the terms of service. For acceptable values, see the comments for id above.", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDiscoveryengineV1betaProvisionProjectMetadata": { +"description": "Metadata associated with a project provision operation.", +"id": "GoogleCloudDiscoveryengineV1betaProvisionProjectMetadata", +"properties": {}, +"type": "object" +}, "GoogleCloudDiscoveryengineV1betaPurgeDocumentsMetadata": { "description": "Metadata related to the progress of the PurgeDocuments operation. This will be returned by the google.longrunning.Operation.metadata field.", "id": "GoogleCloudDiscoveryengineV1betaPurgeDocumentsMetadata", diff --git a/googleapiclient/discovery_cache/documents/discoveryengine.v1alpha.json b/googleapiclient/discovery_cache/documents/discoveryengine.v1alpha.json index 2ee30ef854f..d5e70de7353 100644 --- a/googleapiclient/discovery_cache/documents/discoveryengine.v1alpha.json +++ b/googleapiclient/discovery_cache/documents/discoveryengine.v1alpha.json @@ -546,7 +546,7 @@ ], "parameters": { "filter": { -"description": "Filter by solution type . For example: filter = 'solution_type:SOLUTION_TYPE_SEARCH'", +"description": "Filter by solution type . For example: `filter = 'solution_type:SOLUTION_TYPE_SEARCH'`", "location": "query", "type": "string" }, @@ -1117,6 +1117,168 @@ } } }, +"controls": { +"methods": { +"create": { +"description": "Creates a Control. 
By default 1000 controls are allowed for a data store. A request can be submitted to adjust this limit. If the Control to create already exists, an ALREADY_EXISTS error is returned.", +"flatPath": "v1alpha/projects/{projectsId}/locations/{locationsId}/collections/{collectionsId}/dataStores/{dataStoresId}/controls", +"httpMethod": "POST", +"id": "discoveryengine.projects.locations.collections.dataStores.controls.create", +"parameterOrder": [ +"parent" +], +"parameters": { +"controlId": { +"description": "Required. The ID to use for the Control, which will become the final component of the Control's resource name. This value must be within 1-63 characters. Valid characters are /a-z-_/.", +"location": "query", +"type": "string" +}, +"parent": { +"description": "Required. Full resource name of parent data store. Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}`", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/collections/[^/]+/dataStores/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1alpha/{+parent}/controls", +"request": { +"$ref": "GoogleCloudDiscoveryengineV1alphaControl" +}, +"response": { +"$ref": "GoogleCloudDiscoveryengineV1alphaControl" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +}, +"delete": { +"description": "Deletes a Control. If the Control to delete does not exist, a NOT_FOUND error is returned.", +"flatPath": "v1alpha/projects/{projectsId}/locations/{locationsId}/collections/{collectionsId}/dataStores/{dataStoresId}/controls/{controlsId}", +"httpMethod": "DELETE", +"id": "discoveryengine.projects.locations.collections.dataStores.controls.delete", +"parameterOrder": [ +"name" +], +"parameters": { +"name": { +"description": "Required. The resource name of the Control to delete. Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}/controls/{control_id}`", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/collections/[^/]+/dataStores/[^/]+/controls/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1alpha/{+name}", +"response": { +"$ref": "GoogleProtobufEmpty" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +}, +"get": { +"description": "Gets a Control.", +"flatPath": "v1alpha/projects/{projectsId}/locations/{locationsId}/collections/{collectionsId}/dataStores/{dataStoresId}/controls/{controlsId}", +"httpMethod": "GET", +"id": "discoveryengine.projects.locations.collections.dataStores.controls.get", +"parameterOrder": [ +"name" +], +"parameters": { +"name": { +"description": "Required. The resource name of the Control to get. 
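
A sketch of creating a Control against the v1alpha surface above. Note that `controlId` (1-63 characters, drawn from /a-z-_/) travels as a query parameter while the Control itself is the request body; all IDs are placeholders.

    from googleapiclient.discovery import build

    # The v1alpha methods live in the v1alpha discovery document.
    service = build("discoveryengine", "v1alpha")

    created = (
        service.projects()
        .locations()
        .collections()
        .dataStores()
        .controls()
        .create(
            parent="projects/123/locations/global/collections/default_collection/dataStores/my-store",
            controlId="boost-promo-docs",
            body=promo_control,  # e.g. the Control body sketched earlier
        )
        .execute()
    )
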
Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}/controls/{control_id}`", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/collections/[^/]+/dataStores/[^/]+/controls/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1alpha/{+name}", +"response": { +"$ref": "GoogleCloudDiscoveryengineV1alphaControl" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +}, +"list": { +"description": "Lists all Controls by their parent DataStore.", +"flatPath": "v1alpha/projects/{projectsId}/locations/{locationsId}/collections/{collectionsId}/dataStores/{dataStoresId}/controls", +"httpMethod": "GET", +"id": "discoveryengine.projects.locations.collections.dataStores.controls.list", +"parameterOrder": [ +"parent" +], +"parameters": { +"filter": { +"description": "Optional. A filter to apply on the list results. Supported features: * List all the products under the parent branch if filter is unset. Currently this field is unsupported.", +"location": "query", +"type": "string" +}, +"pageSize": { +"description": "Optional. Maximum number of results to return. If unspecified, defaults to 50. Max allowed value is 1000.", +"format": "int32", +"location": "query", +"type": "integer" +}, +"pageToken": { +"description": "Optional. A page token, received from a previous `ListControls` call. Provide this to retrieve the subsequent page.", +"location": "query", +"type": "string" +}, +"parent": { +"description": "Required. The data store resource name. Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}`", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/collections/[^/]+/dataStores/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1alpha/{+parent}/controls", +"response": { +"$ref": "GoogleCloudDiscoveryengineV1alphaListControlsResponse" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +}, +"patch": { +"description": "Updates a Control. Control action type cannot be changed. If the Control to update does not exist, a NOT_FOUND error is returned.", +"flatPath": "v1alpha/projects/{projectsId}/locations/{locationsId}/collections/{collectionsId}/dataStores/{dataStoresId}/controls/{controlsId}", +"httpMethod": "PATCH", +"id": "discoveryengine.projects.locations.collections.dataStores.controls.patch", +"parameterOrder": [ +"name" +], +"parameters": { +"name": { +"description": "Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/collections/[^/]+/dataStores/[^/]+/controls/[^/]+$", +"required": true, +"type": "string" +}, +"updateMask": { +"description": "Optional. Indicates which fields in the provided Control to update. 
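
Since ListControls pages at up to 1000 results per call and signals the last page by omitting `nextPageToken`, the generated client's standard paging helper applies; `list_next` is the usual googleapiclient idiom rather than anything specific to this artifact, and the parent name is a placeholder.

    controls_resource = (
        service.projects().locations().collections().dataStores().controls()
    )
    request = controls_resource.list(
        parent="projects/123/locations/global/collections/default_collection/dataStores/my-store",
        pageSize=50,
    )
    while request is not None:
        response = request.execute()
        for control in response.get("controls", []):
            print(control.get("displayName"))
        # Returns None once the last page (no nextPageToken) is reached.
        request = controls_resource.list_next(request, response)
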
The following are NOT supported: * Control.name * Control.solution_type If not set or empty, all supported fields are updated.", +"format": "google-fieldmask", +"location": "query", +"type": "string" +} +}, +"path": "v1alpha/{+name}", +"request": { +"$ref": "GoogleCloudDiscoveryengineV1alphaControl" +}, +"response": { +"$ref": "GoogleCloudDiscoveryengineV1alphaControl" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +} +} +}, "conversations": { "methods": { "converse": { @@ -2728,6 +2890,11 @@ "pattern": "^projects/[^/]+/locations/[^/]+/collections/[^/]+/dataStores/[^/]+$", "required": true, "type": "string" +}, +"writeAsync": { +"description": "If set to true, the user event is written asynchronously after validation, and the API responds without waiting for the write.", +"location": "query", +"type": "boolean" } }, "path": "v1alpha/{+parent}/userEvents:write", @@ -2991,6 +3158,168 @@ } }, "resources": { +"controls": { +"methods": { +"create": { +"description": "Creates a Control. By default 1000 controls are allowed for a data store. A request can be submitted to adjust this limit. If the Control to create already exists, an ALREADY_EXISTS error is returned.", +"flatPath": "v1alpha/projects/{projectsId}/locations/{locationsId}/collections/{collectionsId}/engines/{enginesId}/controls", +"httpMethod": "POST", +"id": "discoveryengine.projects.locations.collections.engines.controls.create", +"parameterOrder": [ +"parent" +], +"parameters": { +"controlId": { +"description": "Required. The ID to use for the Control, which will become the final component of the Control's resource name. This value must be within 1-63 characters. Valid characters are /a-z-_/.", +"location": "query", +"type": "string" +}, +"parent": { +"description": "Required. Full resource name of parent data store. Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}`", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/collections/[^/]+/engines/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1alpha/{+parent}/controls", +"request": { +"$ref": "GoogleCloudDiscoveryengineV1alphaControl" +}, +"response": { +"$ref": "GoogleCloudDiscoveryengineV1alphaControl" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +}, +"delete": { +"description": "Deletes a Control. If the Control to delete does not exist, a NOT_FOUND error is returned.", +"flatPath": "v1alpha/projects/{projectsId}/locations/{locationsId}/collections/{collectionsId}/engines/{enginesId}/controls/{controlsId}", +"httpMethod": "DELETE", +"id": "discoveryengine.projects.locations.collections.engines.controls.delete", +"parameterOrder": [ +"name" +], +"parameters": { +"name": { +"description": "Required. The resource name of the Control to delete. 
Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}/controls/{control_id}`", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/collections/[^/]+/engines/[^/]+/controls/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1alpha/{+name}", +"response": { +"$ref": "GoogleProtobufEmpty" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +}, +"get": { +"description": "Gets a Control.", +"flatPath": "v1alpha/projects/{projectsId}/locations/{locationsId}/collections/{collectionsId}/engines/{enginesId}/controls/{controlsId}", +"httpMethod": "GET", +"id": "discoveryengine.projects.locations.collections.engines.controls.get", +"parameterOrder": [ +"name" +], +"parameters": { +"name": { +"description": "Required. The resource name of the Control to get. Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}/controls/{control_id}`", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/collections/[^/]+/engines/[^/]+/controls/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1alpha/{+name}", +"response": { +"$ref": "GoogleCloudDiscoveryengineV1alphaControl" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +}, +"list": { +"description": "Lists all Controls by their parent DataStore.", +"flatPath": "v1alpha/projects/{projectsId}/locations/{locationsId}/collections/{collectionsId}/engines/{enginesId}/controls", +"httpMethod": "GET", +"id": "discoveryengine.projects.locations.collections.engines.controls.list", +"parameterOrder": [ +"parent" +], +"parameters": { +"filter": { +"description": "Optional. A filter to apply on the list results. Supported features: * List all the products under the parent branch if filter is unset. Currently this field is unsupported.", +"location": "query", +"type": "string" +}, +"pageSize": { +"description": "Optional. Maximum number of results to return. If unspecified, defaults to 50. Max allowed value is 1000.", +"format": "int32", +"location": "query", +"type": "integer" +}, +"pageToken": { +"description": "Optional. A page token, received from a previous `ListControls` call. Provide this to retrieve the subsequent page.", +"location": "query", +"type": "string" +}, +"parent": { +"description": "Required. The data store resource name. Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}`", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/collections/[^/]+/engines/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1alpha/{+parent}/controls", +"response": { +"$ref": "GoogleCloudDiscoveryengineV1alphaListControlsResponse" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +}, +"patch": { +"description": "Updates a Control. Control action type cannot be changed. If the Control to update does not exist, a NOT_FOUND error is returned.", +"flatPath": "v1alpha/projects/{projectsId}/locations/{locationsId}/collections/{collectionsId}/engines/{enginesId}/controls/{controlsId}", +"httpMethod": "PATCH", +"id": "discoveryengine.projects.locations.collections.engines.controls.patch", +"parameterOrder": [ +"name" +], +"parameters": { +"name": { +"description": "Immutable. 
Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/collections/[^/]+/engines/[^/]+/controls/[^/]+$", +"required": true, +"type": "string" +}, +"updateMask": { +"description": "Optional. Indicates which fields in the provided Control to update. The following are NOT supported: * Control.name * Control.solution_type If not set or empty, all supported fields are updated.", +"format": "google-fieldmask", +"location": "query", +"type": "string" +} +}, +"path": "v1alpha/{+name}", +"request": { +"$ref": "GoogleCloudDiscoveryengineV1alphaControl" +}, +"response": { +"$ref": "GoogleCloudDiscoveryengineV1alphaControl" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +} +} +}, "conversations": { "methods": { "converse": { @@ -3896,7 +4225,7 @@ ], "parameters": { "filter": { -"description": "Filter by solution type . For example: filter = 'solution_type:SOLUTION_TYPE_SEARCH'", +"description": "Filter by solution type . For example: `filter = 'solution_type:SOLUTION_TYPE_SEARCH'`", "location": "query", "type": "string" }, @@ -4439,11 +4768,173 @@ } } }, -"conversations": { +"controls": { "methods": { -"converse": { -"description": "Converses a conversation.", -"flatPath": "v1alpha/projects/{projectsId}/locations/{locationsId}/dataStores/{dataStoresId}/conversations/{conversationsId}:converse", +"create": { +"description": "Creates a Control. By default 1000 controls are allowed for a data store. A request can be submitted to adjust this limit. If the Control to create already exists, an ALREADY_EXISTS error is returned.", +"flatPath": "v1alpha/projects/{projectsId}/locations/{locationsId}/dataStores/{dataStoresId}/controls", +"httpMethod": "POST", +"id": "discoveryengine.projects.locations.dataStores.controls.create", +"parameterOrder": [ +"parent" +], +"parameters": { +"controlId": { +"description": "Required. The ID to use for the Control, which will become the final component of the Control's resource name. This value must be within 1-63 characters. Valid characters are /a-z-_/.", +"location": "query", +"type": "string" +}, +"parent": { +"description": "Required. Full resource name of parent data store. Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}`", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/dataStores/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1alpha/{+parent}/controls", +"request": { +"$ref": "GoogleCloudDiscoveryengineV1alphaControl" +}, +"response": { +"$ref": "GoogleCloudDiscoveryengineV1alphaControl" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +}, +"delete": { +"description": "Deletes a Control. If the Control to delete does not exist, a NOT_FOUND error is returned.", +"flatPath": "v1alpha/projects/{projectsId}/locations/{locationsId}/dataStores/{dataStoresId}/controls/{controlsId}", +"httpMethod": "DELETE", +"id": "discoveryengine.projects.locations.dataStores.controls.delete", +"parameterOrder": [ +"name" +], +"parameters": { +"name": { +"description": "Required. The resource name of the Control to delete. 
Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}/controls/{control_id}`", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/dataStores/[^/]+/controls/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1alpha/{+name}", +"response": { +"$ref": "GoogleProtobufEmpty" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +}, +"get": { +"description": "Gets a Control.", +"flatPath": "v1alpha/projects/{projectsId}/locations/{locationsId}/dataStores/{dataStoresId}/controls/{controlsId}", +"httpMethod": "GET", +"id": "discoveryengine.projects.locations.dataStores.controls.get", +"parameterOrder": [ +"name" +], +"parameters": { +"name": { +"description": "Required. The resource name of the Control to get. Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}/controls/{control_id}`", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/dataStores/[^/]+/controls/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1alpha/{+name}", +"response": { +"$ref": "GoogleCloudDiscoveryengineV1alphaControl" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +}, +"list": { +"description": "Lists all Controls by their parent DataStore.", +"flatPath": "v1alpha/projects/{projectsId}/locations/{locationsId}/dataStores/{dataStoresId}/controls", +"httpMethod": "GET", +"id": "discoveryengine.projects.locations.dataStores.controls.list", +"parameterOrder": [ +"parent" +], +"parameters": { +"filter": { +"description": "Optional. A filter to apply on the list results. Supported features: * List all the products under the parent branch if filter is unset. Currently this field is unsupported.", +"location": "query", +"type": "string" +}, +"pageSize": { +"description": "Optional. Maximum number of results to return. If unspecified, defaults to 50. Max allowed value is 1000.", +"format": "int32", +"location": "query", +"type": "integer" +}, +"pageToken": { +"description": "Optional. A page token, received from a previous `ListControls` call. Provide this to retrieve the subsequent page.", +"location": "query", +"type": "string" +}, +"parent": { +"description": "Required. The data store resource name. Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}`", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/dataStores/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1alpha/{+parent}/controls", +"response": { +"$ref": "GoogleCloudDiscoveryengineV1alphaListControlsResponse" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +}, +"patch": { +"description": "Updates a Control. Control action type cannot be changed. If the Control to update does not exist, a NOT_FOUND error is returned.", +"flatPath": "v1alpha/projects/{projectsId}/locations/{locationsId}/dataStores/{dataStoresId}/controls/{controlsId}", +"httpMethod": "PATCH", +"id": "discoveryengine.projects.locations.dataStores.controls.patch", +"parameterOrder": [ +"name" +], +"parameters": { +"name": { +"description": "Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/dataStores/[^/]+/controls/[^/]+$", +"required": true, +"type": "string" +}, +"updateMask": { +"description": "Optional. Indicates which fields in the provided Control to update. 
The following are NOT supported: * Control.name * Control.solution_type If not set or empty, all supported fields are updated.", +"format": "google-fieldmask", +"location": "query", +"type": "string" +} +}, +"path": "v1alpha/{+name}", +"request": { +"$ref": "GoogleCloudDiscoveryengineV1alphaControl" +}, +"response": { +"$ref": "GoogleCloudDiscoveryengineV1alphaControl" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +} +} +}, +"conversations": { +"methods": { +"converse": { +"description": "Converses a conversation.", +"flatPath": "v1alpha/projects/{projectsId}/locations/{locationsId}/dataStores/{dataStoresId}/conversations/{conversationsId}:converse", "httpMethod": "POST", "id": "discoveryengine.projects.locations.dataStores.conversations.converse", "parameterOrder": [ @@ -5743,6 +6234,11 @@ "pattern": "^projects/[^/]+/locations/[^/]+/dataStores/[^/]+$", "required": true, "type": "string" +}, +"writeAsync": { +"description": "If set to true, the user event is written asynchronously after validation, and the API responds without waiting for the write.", +"location": "query", +"type": "boolean" } }, "path": "v1alpha/{+parent}/userEvents:write", @@ -5977,6 +6473,11 @@ "pattern": "^projects/[^/]+/locations/[^/]+$", "required": true, "type": "string" +}, +"writeAsync": { +"description": "If set to true, the user event is written asynchronously after validation, and the API responds without waiting for the write.", +"location": "query", +"type": "boolean" } }, "path": "v1alpha/{+parent}/userEvents:write", @@ -6067,7 +6568,7 @@ } } }, -"revision": "20240517", +"revision": "20240526", "rootUrl": "https://discoveryengine.googleapis.com/", "schemas": { "GoogleApiHttpBody": { @@ -6247,6 +6748,200 @@ }, "type": "object" }, +"GoogleCloudDiscoveryengineV1Condition": { +"description": "Defines circumstances to be checked before allowing a behavior", +"id": "GoogleCloudDiscoveryengineV1Condition", +"properties": { +"activeTimeRange": { +"description": "Range of time(s) specifying when condition is active. Maximum of 10 time ranges.", +"items": { +"$ref": "GoogleCloudDiscoveryengineV1ConditionTimeRange" +}, +"type": "array" +}, +"queryTerms": { +"description": "Search only A list of terms to match the query on. Maximum of 10 query terms.", +"items": { +"$ref": "GoogleCloudDiscoveryengineV1ConditionQueryTerm" +}, +"type": "array" +} +}, +"type": "object" +}, +"GoogleCloudDiscoveryengineV1ConditionQueryTerm": { +"description": "Matcher for search request query", +"id": "GoogleCloudDiscoveryengineV1ConditionQueryTerm", +"properties": { +"fullMatch": { +"description": "Whether the search query needs to exactly match the query term.", +"type": "boolean" +}, +"value": { +"description": "The specific query value to match against Must be lowercase, must be UTF-8. Can have at most 3 space separated terms if full_match is true. Cannot be an empty string. Maximum length of 5000 characters.", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDiscoveryengineV1ConditionTimeRange": { +"description": "Used for time-dependent conditions.", +"id": "GoogleCloudDiscoveryengineV1ConditionTimeRange", +"properties": { +"endTime": { +"description": "End of time range. Range is inclusive. Must be in the future.", +"format": "google-datetime", +"type": "string" +}, +"startTime": { +"description": "Start of time range. 
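The new `writeAsync` query parameter applies to the `userEvents:write` methods amended above. A sketch with placeholder resource names, assuming the standard `userEvents.write` method id and a hypothetical minimal event payload:

from googleapiclient.discovery import build

service = build("discoveryengine", "v1alpha")

parent = "projects/PROJECT_NUMBER/locations/global/dataStores/DATA_STORE_ID"

# With writeAsync=True the event is validated, then written asynchronously;
# the call returns without waiting for the write to complete.
service.projects().locations().dataStores().userEvents().write(
    parent=parent,
    writeAsync=True,
    body={
        # Hypothetical minimal event payload.
        "eventType": "search",
        "userPseudoId": "visitor-123",
    },
).execute()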
Range is inclusive.", +"format": "google-datetime", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDiscoveryengineV1Control": { +"description": "Defines a conditioned behavior to employ during serving. Must be attached to a ServingConfig to be considered at serving time. Permitted actions dependent on `SolutionType`.", +"id": "GoogleCloudDiscoveryengineV1Control", +"properties": { +"associatedServingConfigIds": { +"description": "Output only. List of all ServingConfig ids this control is attached to. May take up to 10 minutes to update after changes.", +"items": { +"type": "string" +}, +"readOnly": true, +"type": "array" +}, +"boostAction": { +"$ref": "GoogleCloudDiscoveryengineV1ControlBoostAction", +"description": "Defines a boost-type control" +}, +"conditions": { +"description": "Determines when the associated action will trigger. Omit to always apply the action. Currently only a single condition may be specified. Otherwise an INVALID ARGUMENT error is thrown.", +"items": { +"$ref": "GoogleCloudDiscoveryengineV1Condition" +}, +"type": "array" +}, +"displayName": { +"description": "Required. Human readable name. The identifier used in UI views. Must be UTF-8 encoded string. Length limit is 128 characters. Otherwise an INVALID ARGUMENT error is thrown.", +"type": "string" +}, +"filterAction": { +"$ref": "GoogleCloudDiscoveryengineV1ControlFilterAction", +"description": "Defines a filter-type control Currently not supported by Recommendation" +}, +"name": { +"description": "Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`", +"type": "string" +}, +"redirectAction": { +"$ref": "GoogleCloudDiscoveryengineV1ControlRedirectAction", +"description": "Defines a redirect-type control." +}, +"solutionType": { +"description": "Required. Immutable. What solution the control belongs to. Must be compatible with vertical of resource. Otherwise an INVALID ARGUMENT error is thrown.", +"enum": [ +"SOLUTION_TYPE_UNSPECIFIED", +"SOLUTION_TYPE_RECOMMENDATION", +"SOLUTION_TYPE_SEARCH", +"SOLUTION_TYPE_CHAT", +"SOLUTION_TYPE_GENERATIVE_CHAT" +], +"enumDescriptions": [ +"Default value.", +"Used for Recommendations AI.", +"Used for Discovery Search.", +"Used for use cases related to the Generative AI agent.", +"Used for use cases related to the Generative Chat agent. It's used for Generative chat engine only, the associated data stores must enrolled with `SOLUTION_TYPE_CHAT` solution." +], +"type": "string" +}, +"synonymsAction": { +"$ref": "GoogleCloudDiscoveryengineV1ControlSynonymsAction", +"description": "Treats a group of terms as synonyms of one another." +}, +"useCases": { +"description": "Specifies the use case for the control. Affects what condition fields can be set. Only applies to SOLUTION_TYPE_SEARCH. Currently only allow one use case per control. Must be set when solution_type is SolutionType.SOLUTION_TYPE_SEARCH.", +"items": { +"enum": [ +"SEARCH_USE_CASE_UNSPECIFIED", +"SEARCH_USE_CASE_SEARCH", +"SEARCH_USE_CASE_BROWSE" +], +"enumDescriptions": [ +"Value used when unset. Will not occur in CSS.", +"Search use case. Expects the traffic has a non-empty query.", +"Browse use case. Expects the traffic has an empty query." +], +"type": "string" +}, +"type": "array" +} +}, +"type": "object" +}, +"GoogleCloudDiscoveryengineV1ControlBoostAction": { +"description": "Adjusts order of products in returned list.", +"id": "GoogleCloudDiscoveryengineV1ControlBoostAction", +"properties": { +"boost": { +"description": "Required. 
Strength of the boost, which should be in [-1, 1]. Negative boost means demotion. Default is 0.0 (No-op).", +"format": "float", +"type": "number" +}, +"dataStore": { +"description": "Required. Specifies which data store's documents can be boosted by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store", +"type": "string" +}, +"filter": { +"description": "Required. Specifies which products to apply the boost to. If no filter is provided all products will be boosted (No-op). Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDiscoveryengineV1ControlFilterAction": { +"description": "Specified which products may be included in results. Uses same filter as boost.", +"id": "GoogleCloudDiscoveryengineV1ControlFilterAction", +"properties": { +"dataStore": { +"description": "Required. Specifies which data store's documents can be filtered by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store", +"type": "string" +}, +"filter": { +"description": "Required. A filter to apply on the matching condition results. Required Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDiscoveryengineV1ControlRedirectAction": { +"description": "Redirects a shopper to the provided URI.", +"id": "GoogleCloudDiscoveryengineV1ControlRedirectAction", +"properties": { +"redirectUri": { +"description": "Required. The URI to which the shopper will be redirected. Required. URI must have length equal or less than 2000 characters. Otherwise an INVALID ARGUMENT error is thrown.", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDiscoveryengineV1ControlSynonymsAction": { +"description": "Creates a set of terms that will act as synonyms of one another. Example: \"happy\" will also be considered as \"glad\", \"glad\" will also be considered as \"happy\".", +"id": "GoogleCloudDiscoveryengineV1ControlSynonymsAction", +"properties": { +"synonyms": { +"description": "Defines a set of synonyms. Can specify up to 100 synonyms. Must specify at least 2 synonyms. Otherwise an INVALID ARGUMENT error is thrown.", +"items": { +"type": "string" +}, +"type": "array" +} +}, +"type": "object" +}, "GoogleCloudDiscoveryengineV1CreateDataStoreMetadata": { "description": "Metadata related to the progress of the DataStoreService.CreateDataStore operation. This will be returned by the google.longrunning.Operation.metadata field.", "id": "GoogleCloudDiscoveryengineV1CreateDataStoreMetadata", @@ -6717,7 +7412,7 @@ "id": "GoogleCloudDiscoveryengineV1EngineCommonConfig", "properties": { "companyName": { -"description": "Immutable. The name of the company, business or entity that is associated with the engine. Setting this may help improve LLM related features.", +"description": "The name of the company, business or entity that is associated with the engine. 
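Taken together, the Control, Condition, and action schemas above compose like this. A sketch of a boost-type control body; the data store name and filter expression are placeholders, with filter syntax documented at https://cloud.google.com/retail/docs/filter-and-order:

# A boost-type Control matching the v1 schema above.
boost_control = {
    "displayName": "Boost seasonal catalog",
    "solutionType": "SOLUTION_TYPE_SEARCH",
    "useCases": ["SEARCH_USE_CASE_SEARCH"],
    "conditions": [{
        # Currently only a single condition may be specified,
        # with up to 10 query terms.
        "queryTerms": [{"value": "spring sale", "fullMatch": True}],
    }],
    "boostAction": {
        "boost": 0.5,  # in [-1, 1]; negative values demote
        "dataStore": ("projects/123/locations/global/collections/"
                      "default_collection/dataStores/default_data_store"),
        "filter": 'category: ANY("seasonal")',  # placeholder filter
    },
}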
Setting this may help improve LLM related features.", "type": "string" } }, @@ -6916,6 +7611,85 @@ }, "type": "object" }, +"GoogleCloudDiscoveryengineV1Project": { +"description": "Metadata and configurations for a Google Cloud project in the service.", +"id": "GoogleCloudDiscoveryengineV1Project", +"properties": { +"createTime": { +"description": "Output only. The timestamp when this project is created.", +"format": "google-datetime", +"readOnly": true, +"type": "string" +}, +"name": { +"description": "Output only. Full resource name of the project, for example `projects/{project_number}`. Note that when making requests, project number and project id are both acceptable, but the server will always respond in project number.", +"readOnly": true, +"type": "string" +}, +"provisionCompletionTime": { +"description": "Output only. The timestamp when this project is successfully provisioned. Empty value means this project is still provisioning and is not ready for use.", +"format": "google-datetime", +"readOnly": true, +"type": "string" +}, +"serviceTermsMap": { +"additionalProperties": { +"$ref": "GoogleCloudDiscoveryengineV1ProjectServiceTerms" +}, +"description": "Output only. A map of terms of services. The key is the `id` of ServiceTerms.", +"readOnly": true, +"type": "object" +} +}, +"type": "object" +}, +"GoogleCloudDiscoveryengineV1ProjectServiceTerms": { +"description": "Metadata about the terms of service.", +"id": "GoogleCloudDiscoveryengineV1ProjectServiceTerms", +"properties": { +"acceptTime": { +"description": "The last time when the project agreed to the terms of service.", +"format": "google-datetime", +"type": "string" +}, +"declineTime": { +"description": "The last time when the project declined or revoked the agreement to terms of service.", +"format": "google-datetime", +"type": "string" +}, +"id": { +"description": "The unique identifier of this terms of service. Available terms: * `GA_DATA_USE_TERMS`: [Terms for data use](https://cloud.google.com/retail/data-use-terms). When using this as `id`, the acceptable version to provide is `2022-11-23`.", +"type": "string" +}, +"state": { +"description": "Whether the project has accepted/rejected the service terms or it is still pending.", +"enum": [ +"STATE_UNSPECIFIED", +"TERMS_ACCEPTED", +"TERMS_PENDING", +"TERMS_DECLINED" +], +"enumDescriptions": [ +"The default value of the enum. This value is not actually used.", +"The project has given consent to the terms of service.", +"The project is pending to review and accept the terms of service.", +"The project has declined or revoked the agreement to terms of service." +], +"type": "string" +}, +"version": { +"description": "The version string of the terms of service. For acceptable values, see the comments for id above.", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDiscoveryengineV1ProvisionProjectMetadata": { +"description": "Metadata associated with a project provision operation.", +"id": "GoogleCloudDiscoveryengineV1ProvisionProjectMetadata", +"properties": {}, +"type": "object" +}, "GoogleCloudDiscoveryengineV1PurgeDocumentsMetadata": { "description": "Metadata related to the progress of the PurgeDocuments operation. This will be returned by the google.longrunning.Operation.metadata field.", "id": "GoogleCloudDiscoveryengineV1PurgeDocumentsMetadata", @@ -7391,6 +8165,10 @@ "description": "Specifies whether to filter out adversarial queries. The default value is `false`. Google employs search-query classification to detect adversarial queries. 
No answer is returned if the search query is classified as an adversarial query. For example, a user might ask a question regarding negative comments about the company or submit a query designed to generate unsafe, policy-violating output. If this field is set to `true`, we skip generating answers for adversarial queries and return fallback messages instead.", "type": "boolean" }, +"ignoreLowRelevantContent": { +"description": "Specifies whether to filter out queries that have low relevance. If this field is set to `false`, all search results are used regardless of relevance to generate answers. If set to `true` or unset, the behavior will be determined automatically by the service.", +"type": "boolean" +}, "ignoreNonAnswerSeekingQuery": { "description": "Specifies whether to filter out queries that are not answer-seeking. The default value is `false`. Google employs search-query classification to detect answer-seeking queries. No answer is returned if the search query is classified as a non-answer seeking query. If this field is set to `true`, we skip generating answers for non-answer seeking queries and return fallback messages instead.", "type": "boolean" @@ -7536,6 +8314,13 @@ "$ref": "GoogleCloudDiscoveryengineV1alphaCustomFineTuningSpec", "description": "Custom fine tuning configs." }, +"dataStoreSpecs": { +"description": "Specs defining dataStores to filter on in a search call and configurations for those dataStores. This is only considered for engines with multiple dataStores use case. For single dataStore within an engine, they should use the specs at the top level.", +"items": { +"$ref": "GoogleCloudDiscoveryengineV1alphaSearchRequestDataStoreSpec" +}, +"type": "array" +}, "filter": { "description": "The filter syntax consists of an expression language for constructing a predicate from one or more fields of the documents being filtered. Filter expression is case-sensitive. This will be used to filter search results which may affect the Answer response. If this field is unrecognizable, an `INVALID_ARGUMENT` is returned. Filtering in Vertex AI Search is done by mapping the LHS filter key to a key property defined in the Vertex AI Search backend -- this mapping is defined by the customer in their schema. For example a media customers might have a field 'name' in their schema. In this case the filter would look like this: filter --> name:'ANY(\"king kong\")' For more information about filtering including syntax and filter operators, see [Filter](https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata)", "type": "string" @@ -7804,6 +8589,14 @@ "description": "Page identifier.", "type": "string" }, +"structData": { +"additionalProperties": { +"description": "Properties of the object.", +"type": "any" +}, +"description": "The structured JSON metadata for the document. It is populated from the struct data from the Chunk in search result.", +"type": "object" +}, "title": { "description": "Title.", "type": "string" @@ -7830,6 +8623,14 @@ "description": "Document resource name.", "type": "string" }, +"structData": { +"additionalProperties": { +"description": "Properties of the object.", +"type": "any" +}, +"description": "The structured JSON metadata for the document. It is populated from the struct data from the Chunk in search result.", +"type": "object" +}, "title": { "description": "Title.", "type": "string" @@ -8373,8 +9174,9 @@ "description": "Page span of the chunk." }, "relevanceScore": { -"description": "Represents the relevance score based on similarity. 
Higher score represents the chunk relevance. The score is in range [-1.0, 1.0]. Only populated on SearchService.SearchResponse", +"description": "Output only. Represents the relevance score based on similarity. Higher score indicates higher chunk relevance. The score is in range [-1.0, 1.0]. Only populated on SearchService.SearchResponse.", "format": "double", +"readOnly": true, "type": "number" } }, @@ -8490,36 +9292,230 @@ }, "type": "object" }, -"GoogleCloudDiscoveryengineV1alphaCompleteQueryResponseQuerySuggestion": { -"description": "Suggestions as search queries.", -"id": "GoogleCloudDiscoveryengineV1alphaCompleteQueryResponseQuerySuggestion", +"GoogleCloudDiscoveryengineV1alphaCompleteQueryResponseQuerySuggestion": { +"description": "Suggestions as search queries.", +"id": "GoogleCloudDiscoveryengineV1alphaCompleteQueryResponseQuerySuggestion", +"properties": { +"completableFieldPaths": { +"description": "The unique document field paths that serve as the source of this suggestion if it was generated from completable fields. This field is only populated for the document-completable model.", +"items": { +"type": "string" +}, +"type": "array" +}, +"suggestion": { +"description": "The suggestion for the query.", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDiscoveryengineV1alphaCompletionInfo": { +"description": "Detailed completion information including completion attribution token and clicked completion info.", +"id": "GoogleCloudDiscoveryengineV1alphaCompletionInfo", +"properties": { +"selectedPosition": { +"description": "End user selected CompleteQueryResponse.QuerySuggestion.suggestion position, starting from 0.", +"format": "int32", +"type": "integer" +}, +"selectedSuggestion": { +"description": "End user selected CompleteQueryResponse.QuerySuggestion.suggestion.", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDiscoveryengineV1alphaCondition": { +"description": "Defines circumstances to be checked before allowing a behavior", +"id": "GoogleCloudDiscoveryengineV1alphaCondition", +"properties": { +"activeTimeRange": { +"description": "Range of time(s) specifying when condition is active. Maximum of 10 time ranges.", +"items": { +"$ref": "GoogleCloudDiscoveryengineV1alphaConditionTimeRange" +}, +"type": "array" +}, +"queryTerms": { +"description": "Search only A list of terms to match the query on. Maximum of 10 query terms.", +"items": { +"$ref": "GoogleCloudDiscoveryengineV1alphaConditionQueryTerm" +}, +"type": "array" +} +}, +"type": "object" +}, +"GoogleCloudDiscoveryengineV1alphaConditionQueryTerm": { +"description": "Matcher for search request query", +"id": "GoogleCloudDiscoveryengineV1alphaConditionQueryTerm", +"properties": { +"fullMatch": { +"description": "Whether the search query needs to exactly match the query term.", +"type": "boolean" +}, +"value": { +"description": "The specific query value to match against Must be lowercase, must be UTF-8. Can have at most 3 space separated terms if full_match is true. Cannot be an empty string. Maximum length of 5000 characters.", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDiscoveryengineV1alphaConditionTimeRange": { +"description": "Used for time-dependent conditions.", +"id": "GoogleCloudDiscoveryengineV1alphaConditionTimeRange", +"properties": { +"endTime": { +"description": "End of time range. Range is inclusive. Must be in the future.", +"format": "google-datetime", +"type": "string" +}, +"startTime": { +"description": "Start of time range. 
Range is inclusive.", +"format": "google-datetime", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDiscoveryengineV1alphaControl": { +"description": "Defines a conditioned behavior to employ during serving. Must be attached to a ServingConfig to be considered at serving time. Permitted actions dependent on `SolutionType`.", +"id": "GoogleCloudDiscoveryengineV1alphaControl", +"properties": { +"associatedServingConfigIds": { +"description": "Output only. List of all ServingConfig ids this control is attached to. May take up to 10 minutes to update after changes.", +"items": { +"type": "string" +}, +"readOnly": true, +"type": "array" +}, +"boostAction": { +"$ref": "GoogleCloudDiscoveryengineV1alphaControlBoostAction", +"description": "Defines a boost-type control" +}, +"conditions": { +"description": "Determines when the associated action will trigger. Omit to always apply the action. Currently only a single condition may be specified. Otherwise an INVALID ARGUMENT error is thrown.", +"items": { +"$ref": "GoogleCloudDiscoveryengineV1alphaCondition" +}, +"type": "array" +}, +"displayName": { +"description": "Required. Human readable name. The identifier used in UI views. Must be UTF-8 encoded string. Length limit is 128 characters. Otherwise an INVALID ARGUMENT error is thrown.", +"type": "string" +}, +"filterAction": { +"$ref": "GoogleCloudDiscoveryengineV1alphaControlFilterAction", +"description": "Defines a filter-type control Currently not supported by Recommendation" +}, +"name": { +"description": "Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`", +"type": "string" +}, +"redirectAction": { +"$ref": "GoogleCloudDiscoveryengineV1alphaControlRedirectAction", +"description": "Defines a redirect-type control." +}, +"solutionType": { +"description": "Required. Immutable. What solution the control belongs to. Must be compatible with vertical of resource. Otherwise an INVALID ARGUMENT error is thrown.", +"enum": [ +"SOLUTION_TYPE_UNSPECIFIED", +"SOLUTION_TYPE_RECOMMENDATION", +"SOLUTION_TYPE_SEARCH", +"SOLUTION_TYPE_CHAT", +"SOLUTION_TYPE_GENERATIVE_CHAT" +], +"enumDescriptions": [ +"Default value.", +"Used for Recommendations AI.", +"Used for Discovery Search.", +"Used for use cases related to the Generative AI agent.", +"Used for use cases related to the Generative Chat agent. It's used for Generative chat engine only, the associated data stores must enrolled with `SOLUTION_TYPE_CHAT` solution." +], +"type": "string" +}, +"synonymsAction": { +"$ref": "GoogleCloudDiscoveryengineV1alphaControlSynonymsAction", +"description": "Treats a group of terms as synonyms of one another." +}, +"useCases": { +"description": "Specifies the use case for the control. Affects what condition fields can be set. Only applies to SOLUTION_TYPE_SEARCH. Currently only allow one use case per control. Must be set when solution_type is SolutionType.SOLUTION_TYPE_SEARCH.", +"items": { +"enum": [ +"SEARCH_USE_CASE_UNSPECIFIED", +"SEARCH_USE_CASE_SEARCH", +"SEARCH_USE_CASE_BROWSE" +], +"enumDescriptions": [ +"Value used when unset. Will not occur in CSS.", +"Search use case. Expects the traffic has a non-empty query.", +"Browse use case. Expects the traffic has an empty query." 
+], +"type": "string" +}, +"type": "array" +} +}, +"type": "object" +}, +"GoogleCloudDiscoveryengineV1alphaControlBoostAction": { +"description": "Adjusts order of products in returned list.", +"id": "GoogleCloudDiscoveryengineV1alphaControlBoostAction", +"properties": { +"boost": { +"description": "Required. Strength of the boost, which should be in [-1, 1]. Negative boost means demotion. Default is 0.0 (No-op).", +"format": "float", +"type": "number" +}, +"dataStore": { +"description": "Required. Specifies which data store's documents can be boosted by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store", +"type": "string" +}, +"filter": { +"description": "Required. Specifies which products to apply the boost to. If no filter is provided all products will be boosted (No-op). Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDiscoveryengineV1alphaControlFilterAction": { +"description": "Specified which products may be included in results. Uses same filter as boost.", +"id": "GoogleCloudDiscoveryengineV1alphaControlFilterAction", +"properties": { +"dataStore": { +"description": "Required. Specifies which data store's documents can be filtered by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store", +"type": "string" +}, +"filter": { +"description": "Required. A filter to apply on the matching condition results. Required Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDiscoveryengineV1alphaControlRedirectAction": { +"description": "Redirects a shopper to the provided URI.", +"id": "GoogleCloudDiscoveryengineV1alphaControlRedirectAction", "properties": { -"completableFieldPaths": { -"description": "The unique document field paths that serve as the source of this suggestion if it was generated from completable fields. This field is only populated for the document-completable model.", -"items": { -"type": "string" -}, -"type": "array" -}, -"suggestion": { -"description": "The suggestion for the query.", +"redirectUri": { +"description": "Required. The URI to which the shopper will be redirected. Required. URI must have length equal or less than 2000 characters. Otherwise an INVALID ARGUMENT error is thrown.", "type": "string" } }, "type": "object" }, -"GoogleCloudDiscoveryengineV1alphaCompletionInfo": { -"description": "Detailed completion information including completion attribution token and clicked completion info.", -"id": "GoogleCloudDiscoveryengineV1alphaCompletionInfo", +"GoogleCloudDiscoveryengineV1alphaControlSynonymsAction": { +"description": "Creates a set of terms that will act as synonyms of one another. 
Example: \"happy\" will also be considered as \"glad\", \"glad\" will also be considered as \"happy\".", +"id": "GoogleCloudDiscoveryengineV1alphaControlSynonymsAction", "properties": { -"selectedPosition": { -"description": "End user selected CompleteQueryResponse.QuerySuggestion.suggestion position, starting from 0.", -"format": "int32", -"type": "integer" -}, -"selectedSuggestion": { -"description": "End user selected CompleteQueryResponse.QuerySuggestion.suggestion.", +"synonyms": { +"description": "Defines a set of synonyms. Can specify up to 100 synonyms. Must specify at least 2 synonyms. Otherwise an INVALID ARGUMENT error is thrown.", +"items": { "type": "string" +}, +"type": "array" } }, "type": "object" @@ -8822,7 +9818,7 @@ "TRAINING_FAILED" ], "enumDescriptions": [ -"", +"Default value.", "The model is in a paused training state.", "The model is currently training.", "The model has successfully completed training.", @@ -8832,6 +9828,7 @@ "type": "string" }, "modelVersion": { +"description": "The version of the model.", "format": "int64", "type": "string" }, @@ -9487,7 +10484,7 @@ "id": "GoogleCloudDiscoveryengineV1alphaEngineCommonConfig", "properties": { "companyName": { -"description": "Immutable. The name of the company, business or entity that is associated with the engine. Setting this may help improve LLM related features.", +"description": "The name of the company, business or entity that is associated with the engine. Setting this may help improve LLM related features.", "type": "string" } }, @@ -9801,12 +10798,14 @@ "enum": [ "ADVANCED_SITE_SEARCH_DATA_SOURCE_UNSPECIFIED", "METATAGS", -"PAGEMAP" +"PAGEMAP", +"SCHEMA_ORG" ], "enumDescriptions": [ "Value used when unset.", "Retrieve value from meta tag.", -"Retrieve value from page map." +"Retrieve value from page map.", +"Retrieve value from schema.org data." ], "type": "string" }, @@ -9916,6 +10915,13 @@ ], "type": "string" }, +"schemaOrgPaths": { +"description": "Field paths for indexing custom attribute from schema.org data. More details of schema.org and its defined types can be found at [schema.org](https://schema.org). It is only used on advanced site search schema. Currently only support full path from root. The full path to a field is constructed by concatenating field names, starting from `_root`, with a period `.` as the delimiter. Examples: * Publish date of the root: _root.datePublished * Publish date of the reviews: _root.review.datePublished", +"items": { +"type": "string" +}, +"type": "array" +}, "searchableOption": { "description": "If searchable_option is SEARCHABLE_ENABLED, field values are searchable by text queries in SearchService.Search. If SEARCHABLE_ENABLED but field type is numerical, field values will not be searchable by text queries in SearchService.Search, as there are no text values associated to numerical fields. If searchable_option is unset, the server behavior defaults to SEARCHABLE_DISABLED for fields that support setting searchable options. Only `string` fields that have no key property mapping support setting searchable_option. 
For those fields that do not support setting searchable options, the server will skip searchable option setting, and setting searchable_option for those fields will throw `INVALID_ARGUMENT` error.", "enum": [ @@ -10400,6 +11406,24 @@ }, "type": "object" }, +"GoogleCloudDiscoveryengineV1alphaListControlsResponse": { +"description": "Response for ListControls method.", +"id": "GoogleCloudDiscoveryengineV1alphaListControlsResponse", +"properties": { +"controls": { +"description": "All the Controls for a given data store.", +"items": { +"$ref": "GoogleCloudDiscoveryengineV1alphaControl" +}, +"type": "array" +}, +"nextPageToken": { +"description": "Pagination token, if not returned indicates the last page.", +"type": "string" +} +}, +"type": "object" +}, "GoogleCloudDiscoveryengineV1alphaListConversationsResponse": { "description": "Response for ListConversations method.", "id": "GoogleCloudDiscoveryengineV1alphaListConversationsResponse", @@ -10981,6 +12005,13 @@ "description": "The number of results to return. If this is unset or no bigger than zero, returns all results.", "format": "int32", "type": "integer" +}, +"userLabels": { +"additionalProperties": { +"type": "string" +}, +"description": "The user labels applied to a resource must meet the following requirements: * Each resource can have multiple labels, up to a maximum of 64. * Each label must be a key-value pair. * Keys have a minimum length of 1 character and a maximum length of 63 characters and cannot be empty. Values can be empty and have a maximum length of 63 characters. * Keys and values can contain only lowercase letters, numeric characters, underscores, and dashes. All characters must use UTF-8 encoding, and international characters are allowed. * The key portion of a label must be unique. However, you can use the same key with multiple resources. * Keys must start with a lowercase letter or international character. See [Google Cloud Document](https://cloud.google.com/resource-manager/docs/creating-managing-labels#requirements) for more details.", +"type": "object" } }, "type": "object" @@ -11588,7 +12619,7 @@ "description": "If there is no extractive_content_spec provided, there will be no extractive answer in the search response." }, "searchResultMode": { -"description": "Specifies the search result mode. If unspecified, the search result mode is based on DataStore.DocumentProcessingConfig.chunking_config: * If DataStore.DocumentProcessingConfig.chunking_config is specified, it defaults to `CHUNKS`. * Otherwise, it defaults to `DOCUMENTS`.", +"description": "Specifies the search result mode. If unspecified, the search result mode defaults to `DOCUMENTS`.", "enum": [ "SEARCH_RESULT_MODE_UNSPECIFIED", "DOCUMENTS", @@ -12945,6 +13976,10 @@ "$ref": "GoogleCloudDiscoveryengineV1alphaCompletionInfo", "description": "CompletionService.CompleteQuery details related to the event. This field should be set for `search` event when autocomplete function is enabled and the user clicks a suggestion for search." }, +"dataStore": { +"description": "The DataStore resource full name, of the form `projects/{project}/locations/{location}/collections/{collection_id}/dataStores/{data_store_id}`. Optional. Only required for user events whose data store can't by determined by UserEvent.engine or UserEvent.documents. 
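The ListControlsResponse schema above pairs `nextPageToken` with the `pageToken` request parameter. A sketch of draining all pages with the generated client's standard `list_next()` helper, assuming placeholder resource names:

from googleapiclient.discovery import build

service = build("discoveryengine", "v1alpha")

parent = "projects/PROJECT_NUMBER/locations/global/dataStores/DATA_STORE_ID"
controls_api = service.projects().locations().dataStores().controls()

# Per the schema, a response without nextPageToken is the last page;
# list_next() then returns None and the loop ends.
controls = []
request = controls_api.list(parent=parent, pageSize=50)
while request is not None:
    response = request.execute()
    controls.extend(response.get("controls", []))
    request = controls_api.list_next(request, response)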
If data store is set in the parent of write/import/collect user event requests, this field can be omitted.", +"type": "string" +}, "directUserRequest": { "description": "Should set to true if the request is made directly from the end user, in which case the UserEvent.user_info.user_agent can be populated from the HTTP request. This flag should be set only if the API request is made directly from the end user such as a mobile app (and not if a gateway or a server is processing and pushing the user events). This should not be set when using the JavaScript tag in UserEventService.CollectUserEvent.", "type": "boolean" @@ -12956,6 +13991,10 @@ }, "type": "array" }, +"engine": { +"description": "The Engine resource name, in the form of `projects/{project}/locations/{location}/collections/{collection_id}/engines/{engine_id}`. Optional. Only required for Engine produced user events. For example, user events from blended search.", +"type": "string" +}, "eventTime": { "description": "Only required for UserEventService.ImportUserEvents method. Timestamp of when the user event happened.", "format": "google-datetime", @@ -13064,6 +14103,200 @@ }, "type": "object" }, +"GoogleCloudDiscoveryengineV1betaCondition": { +"description": "Defines circumstances to be checked before allowing a behavior", +"id": "GoogleCloudDiscoveryengineV1betaCondition", +"properties": { +"activeTimeRange": { +"description": "Range of time(s) specifying when condition is active. Maximum of 10 time ranges.", +"items": { +"$ref": "GoogleCloudDiscoveryengineV1betaConditionTimeRange" +}, +"type": "array" +}, +"queryTerms": { +"description": "Search only A list of terms to match the query on. Maximum of 10 query terms.", +"items": { +"$ref": "GoogleCloudDiscoveryengineV1betaConditionQueryTerm" +}, +"type": "array" +} +}, +"type": "object" +}, +"GoogleCloudDiscoveryengineV1betaConditionQueryTerm": { +"description": "Matcher for search request query", +"id": "GoogleCloudDiscoveryengineV1betaConditionQueryTerm", +"properties": { +"fullMatch": { +"description": "Whether the search query needs to exactly match the query term.", +"type": "boolean" +}, +"value": { +"description": "The specific query value to match against Must be lowercase, must be UTF-8. Can have at most 3 space separated terms if full_match is true. Cannot be an empty string. Maximum length of 5000 characters.", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDiscoveryengineV1betaConditionTimeRange": { +"description": "Used for time-dependent conditions.", +"id": "GoogleCloudDiscoveryengineV1betaConditionTimeRange", +"properties": { +"endTime": { +"description": "End of time range. Range is inclusive. Must be in the future.", +"format": "google-datetime", +"type": "string" +}, +"startTime": { +"description": "Start of time range. Range is inclusive.", +"format": "google-datetime", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDiscoveryengineV1betaControl": { +"description": "Defines a conditioned behavior to employ during serving. Must be attached to a ServingConfig to be considered at serving time. Permitted actions dependent on `SolutionType`.", +"id": "GoogleCloudDiscoveryengineV1betaControl", +"properties": { +"associatedServingConfigIds": { +"description": "Output only. List of all ServingConfig ids this control is attached to. 
May take up to 10 minutes to update after changes.", +"items": { +"type": "string" +}, +"readOnly": true, +"type": "array" +}, +"boostAction": { +"$ref": "GoogleCloudDiscoveryengineV1betaControlBoostAction", +"description": "Defines a boost-type control" +}, +"conditions": { +"description": "Determines when the associated action will trigger. Omit to always apply the action. Currently only a single condition may be specified. Otherwise an INVALID ARGUMENT error is thrown.", +"items": { +"$ref": "GoogleCloudDiscoveryengineV1betaCondition" +}, +"type": "array" +}, +"displayName": { +"description": "Required. Human readable name. The identifier used in UI views. Must be UTF-8 encoded string. Length limit is 128 characters. Otherwise an INVALID ARGUMENT error is thrown.", +"type": "string" +}, +"filterAction": { +"$ref": "GoogleCloudDiscoveryengineV1betaControlFilterAction", +"description": "Defines a filter-type control Currently not supported by Recommendation" +}, +"name": { +"description": "Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`", +"type": "string" +}, +"redirectAction": { +"$ref": "GoogleCloudDiscoveryengineV1betaControlRedirectAction", +"description": "Defines a redirect-type control." +}, +"solutionType": { +"description": "Required. Immutable. What solution the control belongs to. Must be compatible with vertical of resource. Otherwise an INVALID ARGUMENT error is thrown.", +"enum": [ +"SOLUTION_TYPE_UNSPECIFIED", +"SOLUTION_TYPE_RECOMMENDATION", +"SOLUTION_TYPE_SEARCH", +"SOLUTION_TYPE_CHAT", +"SOLUTION_TYPE_GENERATIVE_CHAT" +], +"enumDescriptions": [ +"Default value.", +"Used for Recommendations AI.", +"Used for Discovery Search.", +"Used for use cases related to the Generative AI agent.", +"Used for use cases related to the Generative Chat agent. It's used for Generative chat engine only, the associated data stores must enrolled with `SOLUTION_TYPE_CHAT` solution." +], +"type": "string" +}, +"synonymsAction": { +"$ref": "GoogleCloudDiscoveryengineV1betaControlSynonymsAction", +"description": "Treats a group of terms as synonyms of one another." +}, +"useCases": { +"description": "Specifies the use case for the control. Affects what condition fields can be set. Only applies to SOLUTION_TYPE_SEARCH. Currently only allow one use case per control. Must be set when solution_type is SolutionType.SOLUTION_TYPE_SEARCH.", +"items": { +"enum": [ +"SEARCH_USE_CASE_UNSPECIFIED", +"SEARCH_USE_CASE_SEARCH", +"SEARCH_USE_CASE_BROWSE" +], +"enumDescriptions": [ +"Value used when unset. Will not occur in CSS.", +"Search use case. Expects the traffic has a non-empty query.", +"Browse use case. Expects the traffic has an empty query." +], +"type": "string" +}, +"type": "array" +} +}, +"type": "object" +}, +"GoogleCloudDiscoveryengineV1betaControlBoostAction": { +"description": "Adjusts order of products in returned list.", +"id": "GoogleCloudDiscoveryengineV1betaControlBoostAction", +"properties": { +"boost": { +"description": "Required. Strength of the boost, which should be in [-1, 1]. Negative boost means demotion. Default is 0.0 (No-op).", +"format": "float", +"type": "number" +}, +"dataStore": { +"description": "Required. Specifies which data store's documents can be boosted by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store", +"type": "string" +}, +"filter": { +"description": "Required. Specifies which products to apply the boost to. 
If no filter is provided all products will be boosted (No-op). Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDiscoveryengineV1betaControlFilterAction": { +"description": "Specified which products may be included in results. Uses same filter as boost.", +"id": "GoogleCloudDiscoveryengineV1betaControlFilterAction", +"properties": { +"dataStore": { +"description": "Required. Specifies which data store's documents can be filtered by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store", +"type": "string" +}, +"filter": { +"description": "Required. A filter to apply on the matching condition results. Required Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDiscoveryengineV1betaControlRedirectAction": { +"description": "Redirects a shopper to the provided URI.", +"id": "GoogleCloudDiscoveryengineV1betaControlRedirectAction", +"properties": { +"redirectUri": { +"description": "Required. The URI to which the shopper will be redirected. Required. URI must have length equal or less than 2000 characters. Otherwise an INVALID ARGUMENT error is thrown.", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDiscoveryengineV1betaControlSynonymsAction": { +"description": "Creates a set of terms that will act as synonyms of one another. Example: \"happy\" will also be considered as \"glad\", \"glad\" will also be considered as \"happy\".", +"id": "GoogleCloudDiscoveryengineV1betaControlSynonymsAction", +"properties": { +"synonyms": { +"description": "Defines a set of synonyms. Can specify up to 100 synonyms. Must specify at least 2 synonyms. Otherwise an INVALID ARGUMENT error is thrown.", +"items": { +"type": "string" +}, +"type": "array" +} +}, +"type": "object" +}, "GoogleCloudDiscoveryengineV1betaCreateDataStoreMetadata": { "description": "Metadata related to the progress of the DataStoreService.CreateDataStore operation. This will be returned by the google.longrunning.Operation.metadata field.", "id": "GoogleCloudDiscoveryengineV1betaCreateDataStoreMetadata", @@ -13156,7 +14389,7 @@ "TRAINING_FAILED" ], "enumDescriptions": [ -"", +"Default value.", "The model is in a paused training state.", "The model is currently training.", "The model has successfully completed training.", @@ -13166,6 +14399,7 @@ "type": "string" }, "modelVersion": { +"description": "The version of the model.", "format": "int64", "type": "string" }, @@ -13583,7 +14817,7 @@ "id": "GoogleCloudDiscoveryengineV1betaEngineCommonConfig", "properties": { "companyName": { -"description": "Immutable. The name of the company, business or entity that is associated with the engine. Setting this may help improve LLM related features.", +"description": "The name of the company, business or entity that is associated with the engine. Setting this may help improve LLM related features.", "type": "string" } }, @@ -13796,6 +15030,85 @@ }, "type": "object" }, +"GoogleCloudDiscoveryengineV1betaProject": { +"description": "Metadata and configurations for a Google Cloud project in the service.", +"id": "GoogleCloudDiscoveryengineV1betaProject", +"properties": { +"createTime": { +"description": "Output only. 
The timestamp when this project is created.", +"format": "google-datetime", +"readOnly": true, +"type": "string" +}, +"name": { +"description": "Output only. Full resource name of the project, for example `projects/{project_number}`. Note that when making requests, project number and project id are both acceptable, but the server will always respond in project number.", +"readOnly": true, +"type": "string" +}, +"provisionCompletionTime": { +"description": "Output only. The timestamp when this project is successfully provisioned. Empty value means this project is still provisioning and is not ready for use.", +"format": "google-datetime", +"readOnly": true, +"type": "string" +}, +"serviceTermsMap": { +"additionalProperties": { +"$ref": "GoogleCloudDiscoveryengineV1betaProjectServiceTerms" +}, +"description": "Output only. A map of terms of services. The key is the `id` of ServiceTerms.", +"readOnly": true, +"type": "object" +} +}, +"type": "object" +}, +"GoogleCloudDiscoveryengineV1betaProjectServiceTerms": { +"description": "Metadata about the terms of service.", +"id": "GoogleCloudDiscoveryengineV1betaProjectServiceTerms", +"properties": { +"acceptTime": { +"description": "The last time when the project agreed to the terms of service.", +"format": "google-datetime", +"type": "string" +}, +"declineTime": { +"description": "The last time when the project declined or revoked the agreement to terms of service.", +"format": "google-datetime", +"type": "string" +}, +"id": { +"description": "The unique identifier of this terms of service. Available terms: * `GA_DATA_USE_TERMS`: [Terms for data use](https://cloud.google.com/retail/data-use-terms). When using this as `id`, the acceptable version to provide is `2022-11-23`.", +"type": "string" +}, +"state": { +"description": "Whether the project has accepted/rejected the service terms or it is still pending.", +"enum": [ +"STATE_UNSPECIFIED", +"TERMS_ACCEPTED", +"TERMS_PENDING", +"TERMS_DECLINED" +], +"enumDescriptions": [ +"The default value of the enum. This value is not actually used.", +"The project has given consent to the terms of service.", +"The project is pending to review and accept the terms of service.", +"The project has declined or revoked the agreement to terms of service." +], +"type": "string" +}, +"version": { +"description": "The version string of the terms of service. For acceptable values, see the comments for id above.", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDiscoveryengineV1betaProvisionProjectMetadata": { +"description": "Metadata associated with a project provision operation.", +"id": "GoogleCloudDiscoveryengineV1betaProvisionProjectMetadata", +"properties": {}, +"type": "object" +}, "GoogleCloudDiscoveryengineV1betaPurgeDocumentsMetadata": { "description": "Metadata related to the progress of the PurgeDocuments operation. This will be returned by the google.longrunning.Operation.metadata field.", "id": "GoogleCloudDiscoveryengineV1betaPurgeDocumentsMetadata", diff --git a/googleapiclient/discovery_cache/documents/discoveryengine.v1beta.json b/googleapiclient/discovery_cache/documents/discoveryengine.v1beta.json index 86764c4cb40..6799652a25d 100644 --- a/googleapiclient/discovery_cache/documents/discoveryengine.v1beta.json +++ b/googleapiclient/discovery_cache/documents/discoveryengine.v1beta.json @@ -106,6 +106,36 @@ "protocol": "rest", "resources": { "projects": { +"methods": { +"provision": { +"description": "Provisions the project resource. 
During the process, related systems will get prepared and initialized. Caller must read the [Terms for data use](https://cloud.google.com/retail/data-use-terms), and optionally specify in request to provide consent to that service terms.", +"flatPath": "v1beta/projects/{projectsId}:provision", +"httpMethod": "POST", +"id": "discoveryengine.projects.provision", +"parameterOrder": [ +"name" +], +"parameters": { +"name": { +"description": "Required. Full resource name of a Project, such as `projects/{project_id_or_number}`.", +"location": "path", +"pattern": "^projects/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1beta/{+name}:provision", +"request": { +"$ref": "GoogleCloudDiscoveryengineV1betaProvisionProjectRequest" +}, +"response": { +"$ref": "GoogleLongrunningOperation" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +} +}, "resources": { "locations": { "resources": { @@ -355,7 +385,7 @@ ], "parameters": { "filter": { -"description": "Filter by solution type . For example: filter = 'solution_type:SOLUTION_TYPE_SEARCH'", +"description": "Filter by solution type . For example: `filter = 'solution_type:SOLUTION_TYPE_SEARCH'`", "location": "query", "type": "string" }, @@ -770,6 +800,168 @@ } } }, +"controls": { +"methods": { +"create": { +"description": "Creates a Control. By default 1000 controls are allowed for a data store. A request can be submitted to adjust this limit. If the Control to create already exists, an ALREADY_EXISTS error is returned.", +"flatPath": "v1beta/projects/{projectsId}/locations/{locationsId}/collections/{collectionsId}/dataStores/{dataStoresId}/controls", +"httpMethod": "POST", +"id": "discoveryengine.projects.locations.collections.dataStores.controls.create", +"parameterOrder": [ +"parent" +], +"parameters": { +"controlId": { +"description": "Required. The ID to use for the Control, which will become the final component of the Control's resource name. This value must be within 1-63 characters. Valid characters are /a-z-_/.", +"location": "query", +"type": "string" +}, +"parent": { +"description": "Required. Full resource name of parent data store. Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}`", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/collections/[^/]+/dataStores/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1beta/{+parent}/controls", +"request": { +"$ref": "GoogleCloudDiscoveryengineV1betaControl" +}, +"response": { +"$ref": "GoogleCloudDiscoveryengineV1betaControl" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +}, +"delete": { +"description": "Deletes a Control. If the Control to delete does not exist, a NOT_FOUND error is returned.", +"flatPath": "v1beta/projects/{projectsId}/locations/{locationsId}/collections/{collectionsId}/dataStores/{dataStoresId}/controls/{controlsId}", +"httpMethod": "DELETE", +"id": "discoveryengine.projects.locations.collections.dataStores.controls.delete", +"parameterOrder": [ +"name" +], +"parameters": { +"name": { +"description": "Required. The resource name of the Control to delete. 
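The new `projects.provision` method returns a long-running operation. A sketch; the request body fields (e.g. consent to the data-use terms) are defined by the GoogleCloudDiscoveryengineV1betaProvisionProjectRequest schema elsewhere in this artifact, so an empty body is shown here only as a placeholder:

from googleapiclient.discovery import build

service = build("discoveryengine", "v1beta")

# Placeholder request body; populate it per the
# GoogleCloudDiscoveryengineV1betaProvisionProjectRequest schema.
operation = (
    service.projects()
    .provision(name="projects/PROJECT_NUMBER", body={})
    .execute()
)
# The response is a GoogleLongrunningOperation; poll until "done" is true.
print(operation["name"])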
Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}/controls/{control_id}`", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/collections/[^/]+/dataStores/[^/]+/controls/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1beta/{+name}", +"response": { +"$ref": "GoogleProtobufEmpty" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +}, +"get": { +"description": "Gets a Control.", +"flatPath": "v1beta/projects/{projectsId}/locations/{locationsId}/collections/{collectionsId}/dataStores/{dataStoresId}/controls/{controlsId}", +"httpMethod": "GET", +"id": "discoveryengine.projects.locations.collections.dataStores.controls.get", +"parameterOrder": [ +"name" +], +"parameters": { +"name": { +"description": "Required. The resource name of the Control to get. Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}/controls/{control_id}`", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/collections/[^/]+/dataStores/[^/]+/controls/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1beta/{+name}", +"response": { +"$ref": "GoogleCloudDiscoveryengineV1betaControl" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +}, +"list": { +"description": "Lists all Controls by their parent DataStore.", +"flatPath": "v1beta/projects/{projectsId}/locations/{locationsId}/collections/{collectionsId}/dataStores/{dataStoresId}/controls", +"httpMethod": "GET", +"id": "discoveryengine.projects.locations.collections.dataStores.controls.list", +"parameterOrder": [ +"parent" +], +"parameters": { +"filter": { +"description": "Optional. A filter to apply on the list results. Supported features: * List all the products under the parent branch if filter is unset. Currently this field is unsupported.", +"location": "query", +"type": "string" +}, +"pageSize": { +"description": "Optional. Maximum number of results to return. If unspecified, defaults to 50. Max allowed value is 1000.", +"format": "int32", +"location": "query", +"type": "integer" +}, +"pageToken": { +"description": "Optional. A page token, received from a previous `ListControls` call. Provide this to retrieve the subsequent page.", +"location": "query", +"type": "string" +}, +"parent": { +"description": "Required. The data store resource name. Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}`", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/collections/[^/]+/dataStores/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1beta/{+parent}/controls", +"response": { +"$ref": "GoogleCloudDiscoveryengineV1betaListControlsResponse" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +}, +"patch": { +"description": "Updates a Control. Control action type cannot be changed. If the Control to update does not exist, a NOT_FOUND error is returned.", +"flatPath": "v1beta/projects/{projectsId}/locations/{locationsId}/collections/{collectionsId}/dataStores/{dataStoresId}/controls/{controlsId}", +"httpMethod": "PATCH", +"id": "discoveryengine.projects.locations.collections.dataStores.controls.patch", +"parameterOrder": [ +"name" +], +"parameters": { +"name": { +"description": "Immutable. 
Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/collections/[^/]+/dataStores/[^/]+/controls/[^/]+$", +"required": true, +"type": "string" +}, +"updateMask": { +"description": "Optional. Indicates which fields in the provided Control to update. The following are NOT supported: * Control.name * Control.solution_type If not set or empty, all supported fields are updated.", +"format": "google-fieldmask", +"location": "query", +"type": "string" +} +}, +"path": "v1beta/{+name}", +"request": { +"$ref": "GoogleCloudDiscoveryengineV1betaControl" +}, +"response": { +"$ref": "GoogleCloudDiscoveryengineV1betaControl" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +} +} +}, "conversations": { "methods": { "converse": { @@ -2353,6 +2545,11 @@ "pattern": "^projects/[^/]+/locations/[^/]+/collections/[^/]+/dataStores/[^/]+$", "required": true, "type": "string" +}, +"writeAsync": { +"description": "If set to true, the user event is written asynchronously after validation, and the API responds without waiting for the write.", +"location": "query", +"type": "boolean" } }, "path": "v1beta/{+parent}/userEvents:write", @@ -2616,77 +2813,54 @@ } }, "resources": { -"conversations": { +"controls": { "methods": { -"converse": { -"description": "Converses a conversation.", -"flatPath": "v1beta/projects/{projectsId}/locations/{locationsId}/collections/{collectionsId}/engines/{enginesId}/conversations/{conversationsId}:converse", -"httpMethod": "POST", -"id": "discoveryengine.projects.locations.collections.engines.conversations.converse", -"parameterOrder": [ -"name" -], -"parameters": { -"name": { -"description": "Required. The resource name of the Conversation to get. Format: `projects/{project_number}/locations/{location_id}/collections/{collection}/dataStores/{data_store_id}/conversations/{conversation_id}`. Use `projects/{project_number}/locations/{location_id}/collections/{collection}/dataStores/{data_store_id}/conversations/-` to activate auto session mode, which automatically creates a new conversation inside a ConverseConversation session.", -"location": "path", -"pattern": "^projects/[^/]+/locations/[^/]+/collections/[^/]+/engines/[^/]+/conversations/[^/]+$", -"required": true, -"type": "string" -} -}, -"path": "v1beta/{+name}:converse", -"request": { -"$ref": "GoogleCloudDiscoveryengineV1betaConverseConversationRequest" -}, -"response": { -"$ref": "GoogleCloudDiscoveryengineV1betaConverseConversationResponse" -}, -"scopes": [ -"https://www.googleapis.com/auth/cloud-platform" -] -}, "create": { -"description": "Creates a Conversation. If the Conversation to create already exists, an ALREADY_EXISTS error is returned.", -"flatPath": "v1beta/projects/{projectsId}/locations/{locationsId}/collections/{collectionsId}/engines/{enginesId}/conversations", +"description": "Creates a Control. By default 1000 controls are allowed for a data store. A request can be submitted to adjust this limit. If the Control to create already exists, an ALREADY_EXISTS error is returned.", +"flatPath": "v1beta/projects/{projectsId}/locations/{locationsId}/collections/{collectionsId}/engines/{enginesId}/controls", "httpMethod": "POST", -"id": "discoveryengine.projects.locations.collections.engines.conversations.create", +"id": "discoveryengine.projects.locations.collections.engines.controls.create", "parameterOrder": [ "parent" ], "parameters": { +"controlId": { +"description": "Required. 
The ID to use for the Control, which will become the final component of the Control's resource name. This value must be within 1-63 characters. Valid characters are /a-z-_/.", +"location": "query", +"type": "string" +}, "parent": { -"description": "Required. Full resource name of parent data store. Format: `projects/{project_number}/locations/{location_id}/collections/{collection}/dataStores/{data_store_id}`", +"description": "Required. Full resource name of parent data store. Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}`", "location": "path", "pattern": "^projects/[^/]+/locations/[^/]+/collections/[^/]+/engines/[^/]+$", "required": true, "type": "string" } }, -"path": "v1beta/{+parent}/conversations", +"path": "v1beta/{+parent}/controls", "request": { -"$ref": "GoogleCloudDiscoveryengineV1betaConversation" +"$ref": "GoogleCloudDiscoveryengineV1betaControl" }, "response": { -"$ref": "GoogleCloudDiscoveryengineV1betaConversation" +"$ref": "GoogleCloudDiscoveryengineV1betaControl" }, "scopes": [ "https://www.googleapis.com/auth/cloud-platform" ] }, "delete": { -"description": "Deletes a Conversation. If the Conversation to delete does not exist, a NOT_FOUND error is returned.", -"flatPath": "v1beta/projects/{projectsId}/locations/{locationsId}/collections/{collectionsId}/engines/{enginesId}/conversations/{conversationsId}", +"description": "Deletes a Control. If the Control to delete does not exist, a NOT_FOUND error is returned.", +"flatPath": "v1beta/projects/{projectsId}/locations/{locationsId}/collections/{collectionsId}/engines/{enginesId}/controls/{controlsId}", "httpMethod": "DELETE", -"id": "discoveryengine.projects.locations.collections.engines.conversations.delete", +"id": "discoveryengine.projects.locations.collections.engines.controls.delete", "parameterOrder": [ "name" ], "parameters": { "name": { -"description": "Required. The resource name of the Conversation to delete. Format: `projects/{project_number}/locations/{location_id}/collections/{collection}/dataStores/{data_store_id}/conversations/{conversation_id}`", +"description": "Required. The resource name of the Control to delete. Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}/controls/{control_id}`", "location": "path", -"pattern": "^projects/[^/]+/locations/[^/]+/collections/[^/]+/engines/[^/]+/conversations/[^/]+$", +"pattern": "^projects/[^/]+/locations/[^/]+/collections/[^/]+/engines/[^/]+/controls/[^/]+$", "required": true, "type": "string" } @@ -2700,94 +2874,89 @@ ] }, "get": { -"description": "Gets a Conversation.", -"flatPath": "v1beta/projects/{projectsId}/locations/{locationsId}/collections/{collectionsId}/engines/{enginesId}/conversations/{conversationsId}", +"description": "Gets a Control.", +"flatPath": "v1beta/projects/{projectsId}/locations/{locationsId}/collections/{collectionsId}/engines/{enginesId}/controls/{controlsId}", "httpMethod": "GET", -"id": "discoveryengine.projects.locations.collections.engines.conversations.get", +"id": "discoveryengine.projects.locations.collections.engines.controls.get", "parameterOrder": [ "name" ], "parameters": { "name": { -"description": "Required. The resource name of the Conversation to get. Format: `projects/{project_number}/locations/{location_id}/collections/{collection}/dataStores/{data_store_id}/conversations/{conversation_id}`", +"description": "Required. The resource name of the Control to get. 
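
For context (not part of the generated artifact): a minimal sketch of creating one of the new `controls` resources on a data store, using the `parent` format and `controlId` query parameter documented above. All IDs are placeholders; the `boostAction` fields (`boost`, `dataStore`, `filter`) mirror the v1/v1alpha Control schemas shown later in this artifact, assuming the v1beta schema matches, and the filter string is only an illustrative value in the retail filter syntax the schema links to.

```python
# Sketch only: dataStores.controls.create with a boost-type Control.
# IDs and the filter expression are placeholders; the Control body is
# assumed to follow the v1/v1alpha Control schema shape.
from googleapiclient.discovery import build

service = build("discoveryengine", "v1beta")

parent = (
    "projects/my-project/locations/global/"
    "collections/default_collection/dataStores/my-data-store"
)
control = {
    "displayName": "boost-faq-pages",
    "solutionType": "SOLUTION_TYPE_SEARCH",
    "boostAction": {
        "boost": 0.5,           # Strength in [-1, 1]; negative demotes.
        "dataStore": parent,    # Full data store name, per the schema.
        "filter": 'category: ANY("faq")',
    },
}

created = (
    service.projects()
    .locations()
    .collections()
    .dataStores()
    .controls()
    .create(parent=parent, controlId="boost-faq-pages", body=control)
    .execute()
)
```
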
Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}/controls/{control_id}`", "location": "path", -"pattern": "^projects/[^/]+/locations/[^/]+/collections/[^/]+/engines/[^/]+/conversations/[^/]+$", +"pattern": "^projects/[^/]+/locations/[^/]+/collections/[^/]+/engines/[^/]+/controls/[^/]+$", "required": true, "type": "string" } }, "path": "v1beta/{+name}", "response": { -"$ref": "GoogleCloudDiscoveryengineV1betaConversation" +"$ref": "GoogleCloudDiscoveryengineV1betaControl" }, "scopes": [ "https://www.googleapis.com/auth/cloud-platform" ] }, "list": { -"description": "Lists all Conversations by their parent DataStore.", -"flatPath": "v1beta/projects/{projectsId}/locations/{locationsId}/collections/{collectionsId}/engines/{enginesId}/conversations", +"description": "Lists all Controls by their parent DataStore.", +"flatPath": "v1beta/projects/{projectsId}/locations/{locationsId}/collections/{collectionsId}/engines/{enginesId}/controls", "httpMethod": "GET", -"id": "discoveryengine.projects.locations.collections.engines.conversations.list", +"id": "discoveryengine.projects.locations.collections.engines.controls.list", "parameterOrder": [ "parent" ], "parameters": { "filter": { -"description": "A filter to apply on the list results. The supported features are: user_pseudo_id, state. Example: \"user_pseudo_id = some_id\"", -"location": "query", -"type": "string" -}, -"orderBy": { -"description": "A comma-separated list of fields to order by, sorted in ascending order. Use \"desc\" after a field name for descending. Supported fields: * `update_time` * `create_time` * `conversation_name` Example: \"update_time desc\" \"create_time\"", +"description": "Optional. A filter to apply on the list results. Supported features: * List all the products under the parent branch if filter is unset. Currently this field is unsupported.", "location": "query", "type": "string" }, "pageSize": { -"description": "Maximum number of results to return. If unspecified, defaults to 50. Max allowed value is 1000.", +"description": "Optional. Maximum number of results to return. If unspecified, defaults to 50. Max allowed value is 1000.", "format": "int32", "location": "query", "type": "integer" }, "pageToken": { -"description": "A page token, received from a previous `ListConversations` call. Provide this to retrieve the subsequent page.", +"description": "Optional. A page token, received from a previous `ListControls` call. Provide this to retrieve the subsequent page.", "location": "query", "type": "string" }, "parent": { -"description": "Required. The data store resource name. Format: `projects/{project_number}/locations/{location_id}/collections/{collection}/dataStores/{data_store_id}`", +"description": "Required. The data store resource name. Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}`", "location": "path", "pattern": "^projects/[^/]+/locations/[^/]+/collections/[^/]+/engines/[^/]+$", "required": true, "type": "string" } }, -"path": "v1beta/{+parent}/conversations", +"path": "v1beta/{+parent}/controls", "response": { -"$ref": "GoogleCloudDiscoveryengineV1betaListConversationsResponse" +"$ref": "GoogleCloudDiscoveryengineV1betaListControlsResponse" }, "scopes": [ "https://www.googleapis.com/auth/cloud-platform" ] }, "patch": { -"description": "Updates a Conversation. Conversation action type cannot be changed. 
If the Conversation to update does not exist, a NOT_FOUND error is returned.", -"flatPath": "v1beta/projects/{projectsId}/locations/{locationsId}/collections/{collectionsId}/engines/{enginesId}/conversations/{conversationsId}", +"description": "Updates a Control. Control action type cannot be changed. If the Control to update does not exist, a NOT_FOUND error is returned.", +"flatPath": "v1beta/projects/{projectsId}/locations/{locationsId}/collections/{collectionsId}/engines/{enginesId}/controls/{controlsId}", "httpMethod": "PATCH", -"id": "discoveryengine.projects.locations.collections.engines.conversations.patch", +"id": "discoveryengine.projects.locations.collections.engines.controls.patch", "parameterOrder": [ "name" ], "parameters": { "name": { -"description": "Immutable. Fully qualified name `projects/{project}/locations/global/collections/{collection}/dataStore/*/conversations/*` or `projects/{project}/locations/global/collections/{collection}/engines/*/conversations/*`.", +"description": "Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`", "location": "path", -"pattern": "^projects/[^/]+/locations/[^/]+/collections/[^/]+/engines/[^/]+/conversations/[^/]+$", +"pattern": "^projects/[^/]+/locations/[^/]+/collections/[^/]+/engines/[^/]+/controls/[^/]+$", "required": true, "type": "string" }, "updateMask": { -"description": "Indicates which fields in the provided Conversation to update. The following are NOT supported: * Conversation.name If not set or empty, all supported fields are updated.", +"description": "Optional. Indicates which fields in the provided Control to update. The following are NOT supported: * Control.name * Control.solution_type If not set or empty, all supported fields are updated.", "format": "google-fieldmask", "location": "query", "type": "string" @@ -2795,10 +2964,10 @@ }, "path": "v1beta/{+name}", "request": { -"$ref": "GoogleCloudDiscoveryengineV1betaConversation" +"$ref": "GoogleCloudDiscoveryengineV1betaControl" }, "response": { -"$ref": "GoogleCloudDiscoveryengineV1betaConversation" +"$ref": "GoogleCloudDiscoveryengineV1betaControl" }, "scopes": [ "https://www.googleapis.com/auth/cloud-platform" @@ -2806,40 +2975,230 @@ } } }, -"operations": { +"conversations": { "methods": { -"get": { -"description": "Gets the latest state of a long-running operation. Clients can use this method to poll the operation result at intervals as recommended by the API service.", -"flatPath": "v1beta/projects/{projectsId}/locations/{locationsId}/collections/{collectionsId}/engines/{enginesId}/operations/{operationsId}", -"httpMethod": "GET", -"id": "discoveryengine.projects.locations.collections.engines.operations.get", +"converse": { +"description": "Converses a conversation.", +"flatPath": "v1beta/projects/{projectsId}/locations/{locationsId}/collections/{collectionsId}/engines/{enginesId}/conversations/{conversationsId}:converse", +"httpMethod": "POST", +"id": "discoveryengine.projects.locations.collections.engines.conversations.converse", "parameterOrder": [ "name" ], "parameters": { "name": { -"description": "The name of the operation resource.", +"description": "Required. The resource name of the Conversation to get. Format: `projects/{project_number}/locations/{location_id}/collections/{collection}/dataStores/{data_store_id}/conversations/{conversation_id}`. 
Use `projects/{project_number}/locations/{location_id}/collections/{collection}/dataStores/{data_store_id}/conversations/-` to activate auto session mode, which automatically creates a new conversation inside a ConverseConversation session.", "location": "path", -"pattern": "^projects/[^/]+/locations/[^/]+/collections/[^/]+/engines/[^/]+/operations/[^/]+$", +"pattern": "^projects/[^/]+/locations/[^/]+/collections/[^/]+/engines/[^/]+/conversations/[^/]+$", "required": true, "type": "string" } }, -"path": "v1beta/{+name}", +"path": "v1beta/{+name}:converse", +"request": { +"$ref": "GoogleCloudDiscoveryengineV1betaConverseConversationRequest" +}, "response": { -"$ref": "GoogleLongrunningOperation" +"$ref": "GoogleCloudDiscoveryengineV1betaConverseConversationResponse" }, "scopes": [ "https://www.googleapis.com/auth/cloud-platform" ] }, -"list": { -"description": "Lists operations that match the specified filter in the request. If the server doesn't support this method, it returns `UNIMPLEMENTED`.", -"flatPath": "v1beta/projects/{projectsId}/locations/{locationsId}/collections/{collectionsId}/engines/{enginesId}/operations", -"httpMethod": "GET", -"id": "discoveryengine.projects.locations.collections.engines.operations.list", +"create": { +"description": "Creates a Conversation. If the Conversation to create already exists, an ALREADY_EXISTS error is returned.", +"flatPath": "v1beta/projects/{projectsId}/locations/{locationsId}/collections/{collectionsId}/engines/{enginesId}/conversations", +"httpMethod": "POST", +"id": "discoveryengine.projects.locations.collections.engines.conversations.create", "parameterOrder": [ -"name" +"parent" +], +"parameters": { +"parent": { +"description": "Required. Full resource name of parent data store. Format: `projects/{project_number}/locations/{location_id}/collections/{collection}/dataStores/{data_store_id}`", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/collections/[^/]+/engines/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1beta/{+parent}/conversations", +"request": { +"$ref": "GoogleCloudDiscoveryengineV1betaConversation" +}, +"response": { +"$ref": "GoogleCloudDiscoveryengineV1betaConversation" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +}, +"delete": { +"description": "Deletes a Conversation. If the Conversation to delete does not exist, a NOT_FOUND error is returned.", +"flatPath": "v1beta/projects/{projectsId}/locations/{locationsId}/collections/{collectionsId}/engines/{enginesId}/conversations/{conversationsId}", +"httpMethod": "DELETE", +"id": "discoveryengine.projects.locations.collections.engines.conversations.delete", +"parameterOrder": [ +"name" +], +"parameters": { +"name": { +"description": "Required. The resource name of the Conversation to delete. 
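
For context (not part of the generated artifact): a minimal sketch of the `engines.conversations.converse` method in the auto-session mode described above, where the `conversations/-` suffix creates a new conversation implicitly. Resource names are placeholders, and the request shape (`query.input`) is an assumption about `GoogleCloudDiscoveryengineV1betaConverseConversationRequest`.

```python
# Sketch only: engines.conversations.converse with auto session mode.
# The "-" conversation ID starts a fresh session; query.input is an
# assumed field of the ConverseConversationRequest body.
from googleapiclient.discovery import build

service = build("discoveryengine", "v1beta")

name = (
    "projects/my-project/locations/global/collections/default_collection/"
    "engines/my-engine/conversations/-"
)
reply = (
    service.projects()
    .locations()
    .collections()
    .engines()
    .conversations()
    .converse(name=name, body={"query": {"input": "What plans do you offer?"}})
    .execute()
)
```
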
Format: `projects/{project_number}/locations/{location_id}/collections/{collection}/dataStores/{data_store_id}/conversations/{conversation_id}`", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/collections/[^/]+/engines/[^/]+/conversations/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1beta/{+name}", +"response": { +"$ref": "GoogleProtobufEmpty" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +}, +"get": { +"description": "Gets a Conversation.", +"flatPath": "v1beta/projects/{projectsId}/locations/{locationsId}/collections/{collectionsId}/engines/{enginesId}/conversations/{conversationsId}", +"httpMethod": "GET", +"id": "discoveryengine.projects.locations.collections.engines.conversations.get", +"parameterOrder": [ +"name" +], +"parameters": { +"name": { +"description": "Required. The resource name of the Conversation to get. Format: `projects/{project_number}/locations/{location_id}/collections/{collection}/dataStores/{data_store_id}/conversations/{conversation_id}`", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/collections/[^/]+/engines/[^/]+/conversations/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1beta/{+name}", +"response": { +"$ref": "GoogleCloudDiscoveryengineV1betaConversation" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +}, +"list": { +"description": "Lists all Conversations by their parent DataStore.", +"flatPath": "v1beta/projects/{projectsId}/locations/{locationsId}/collections/{collectionsId}/engines/{enginesId}/conversations", +"httpMethod": "GET", +"id": "discoveryengine.projects.locations.collections.engines.conversations.list", +"parameterOrder": [ +"parent" +], +"parameters": { +"filter": { +"description": "A filter to apply on the list results. The supported features are: user_pseudo_id, state. Example: \"user_pseudo_id = some_id\"", +"location": "query", +"type": "string" +}, +"orderBy": { +"description": "A comma-separated list of fields to order by, sorted in ascending order. Use \"desc\" after a field name for descending. Supported fields: * `update_time` * `create_time` * `conversation_name` Example: \"update_time desc\" \"create_time\"", +"location": "query", +"type": "string" +}, +"pageSize": { +"description": "Maximum number of results to return. If unspecified, defaults to 50. Max allowed value is 1000.", +"format": "int32", +"location": "query", +"type": "integer" +}, +"pageToken": { +"description": "A page token, received from a previous `ListConversations` call. Provide this to retrieve the subsequent page.", +"location": "query", +"type": "string" +}, +"parent": { +"description": "Required. The data store resource name. Format: `projects/{project_number}/locations/{location_id}/collections/{collection}/dataStores/{data_store_id}`", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/collections/[^/]+/engines/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1beta/{+parent}/conversations", +"response": { +"$ref": "GoogleCloudDiscoveryengineV1betaListConversationsResponse" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +}, +"patch": { +"description": "Updates a Conversation. Conversation action type cannot be changed. 
If the Conversation to update does not exist, a NOT_FOUND error is returned.", +"flatPath": "v1beta/projects/{projectsId}/locations/{locationsId}/collections/{collectionsId}/engines/{enginesId}/conversations/{conversationsId}", +"httpMethod": "PATCH", +"id": "discoveryengine.projects.locations.collections.engines.conversations.patch", +"parameterOrder": [ +"name" +], +"parameters": { +"name": { +"description": "Immutable. Fully qualified name `projects/{project}/locations/global/collections/{collection}/dataStore/*/conversations/*` or `projects/{project}/locations/global/collections/{collection}/engines/*/conversations/*`.", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/collections/[^/]+/engines/[^/]+/conversations/[^/]+$", +"required": true, +"type": "string" +}, +"updateMask": { +"description": "Indicates which fields in the provided Conversation to update. The following are NOT supported: * Conversation.name If not set or empty, all supported fields are updated.", +"format": "google-fieldmask", +"location": "query", +"type": "string" +} +}, +"path": "v1beta/{+name}", +"request": { +"$ref": "GoogleCloudDiscoveryengineV1betaConversation" +}, +"response": { +"$ref": "GoogleCloudDiscoveryengineV1betaConversation" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +} +} +}, +"operations": { +"methods": { +"get": { +"description": "Gets the latest state of a long-running operation. Clients can use this method to poll the operation result at intervals as recommended by the API service.", +"flatPath": "v1beta/projects/{projectsId}/locations/{locationsId}/collections/{collectionsId}/engines/{enginesId}/operations/{operationsId}", +"httpMethod": "GET", +"id": "discoveryengine.projects.locations.collections.engines.operations.get", +"parameterOrder": [ +"name" +], +"parameters": { +"name": { +"description": "The name of the operation resource.", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/collections/[^/]+/engines/[^/]+/operations/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1beta/{+name}", +"response": { +"$ref": "GoogleLongrunningOperation" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +}, +"list": { +"description": "Lists operations that match the specified filter in the request. If the server doesn't support this method, it returns `UNIMPLEMENTED`.", +"flatPath": "v1beta/projects/{projectsId}/locations/{locationsId}/collections/{collectionsId}/engines/{enginesId}/operations", +"httpMethod": "GET", +"id": "discoveryengine.projects.locations.collections.engines.operations.list", +"parameterOrder": [ +"name" ], "parameters": { "filter": { @@ -3496,7 +3855,7 @@ ], "parameters": { "filter": { -"description": "Filter by solution type . For example: filter = 'solution_type:SOLUTION_TYPE_SEARCH'", +"description": "Filter by solution type . For example: `filter = 'solution_type:SOLUTION_TYPE_SEARCH'`", "location": "query", "type": "string" }, @@ -3883,6 +4242,168 @@ } } }, +"controls": { +"methods": { +"create": { +"description": "Creates a Control. By default 1000 controls are allowed for a data store. A request can be submitted to adjust this limit. 
If the Control to create already exists, an ALREADY_EXISTS error is returned.", +"flatPath": "v1beta/projects/{projectsId}/locations/{locationsId}/dataStores/{dataStoresId}/controls", +"httpMethod": "POST", +"id": "discoveryengine.projects.locations.dataStores.controls.create", +"parameterOrder": [ +"parent" +], +"parameters": { +"controlId": { +"description": "Required. The ID to use for the Control, which will become the final component of the Control's resource name. This value must be within 1-63 characters. Valid characters are /a-z-_/.", +"location": "query", +"type": "string" +}, +"parent": { +"description": "Required. Full resource name of parent data store. Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}`", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/dataStores/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1beta/{+parent}/controls", +"request": { +"$ref": "GoogleCloudDiscoveryengineV1betaControl" +}, +"response": { +"$ref": "GoogleCloudDiscoveryengineV1betaControl" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +}, +"delete": { +"description": "Deletes a Control. If the Control to delete does not exist, a NOT_FOUND error is returned.", +"flatPath": "v1beta/projects/{projectsId}/locations/{locationsId}/dataStores/{dataStoresId}/controls/{controlsId}", +"httpMethod": "DELETE", +"id": "discoveryengine.projects.locations.dataStores.controls.delete", +"parameterOrder": [ +"name" +], +"parameters": { +"name": { +"description": "Required. The resource name of the Control to delete. Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}/controls/{control_id}`", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/dataStores/[^/]+/controls/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1beta/{+name}", +"response": { +"$ref": "GoogleProtobufEmpty" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +}, +"get": { +"description": "Gets a Control.", +"flatPath": "v1beta/projects/{projectsId}/locations/{locationsId}/dataStores/{dataStoresId}/controls/{controlsId}", +"httpMethod": "GET", +"id": "discoveryengine.projects.locations.dataStores.controls.get", +"parameterOrder": [ +"name" +], +"parameters": { +"name": { +"description": "Required. The resource name of the Control to get. Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}/controls/{control_id}`", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/dataStores/[^/]+/controls/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1beta/{+name}", +"response": { +"$ref": "GoogleCloudDiscoveryengineV1betaControl" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +}, +"list": { +"description": "Lists all Controls by their parent DataStore.", +"flatPath": "v1beta/projects/{projectsId}/locations/{locationsId}/dataStores/{dataStoresId}/controls", +"httpMethod": "GET", +"id": "discoveryengine.projects.locations.dataStores.controls.list", +"parameterOrder": [ +"parent" +], +"parameters": { +"filter": { +"description": "Optional. A filter to apply on the list results. Supported features: * List all the products under the parent branch if filter is unset. Currently this field is unsupported.", +"location": "query", +"type": "string" +}, +"pageSize": { +"description": "Optional. Maximum number of results to return. 
If unspecified, defaults to 50. Max allowed value is 1000.", +"format": "int32", +"location": "query", +"type": "integer" +}, +"pageToken": { +"description": "Optional. A page token, received from a previous `ListControls` call. Provide this to retrieve the subsequent page.", +"location": "query", +"type": "string" +}, +"parent": { +"description": "Required. The data store resource name. Format: `projects/{project_number}/locations/{location_id}/collections/{collection_id}/dataStores/{data_store_id}`", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/dataStores/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1beta/{+parent}/controls", +"response": { +"$ref": "GoogleCloudDiscoveryengineV1betaListControlsResponse" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +}, +"patch": { +"description": "Updates a Control. Control action type cannot be changed. If the Control to update does not exist, a NOT_FOUND error is returned.", +"flatPath": "v1beta/projects/{projectsId}/locations/{locationsId}/dataStores/{dataStoresId}/controls/{controlsId}", +"httpMethod": "PATCH", +"id": "discoveryengine.projects.locations.dataStores.controls.patch", +"parameterOrder": [ +"name" +], +"parameters": { +"name": { +"description": "Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/dataStores/[^/]+/controls/[^/]+$", +"required": true, +"type": "string" +}, +"updateMask": { +"description": "Optional. Indicates which fields in the provided Control to update. The following are NOT supported: * Control.name * Control.solution_type If not set or empty, all supported fields are updated.", +"format": "google-fieldmask", +"location": "query", +"type": "string" +} +}, +"path": "v1beta/{+name}", +"request": { +"$ref": "GoogleCloudDiscoveryengineV1betaControl" +}, +"response": { +"$ref": "GoogleCloudDiscoveryengineV1betaControl" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +} +} +}, "conversations": { "methods": { "converse": { @@ -5159,6 +5680,11 @@ "pattern": "^projects/[^/]+/locations/[^/]+/dataStores/[^/]+$", "required": true, "type": "string" +}, +"writeAsync": { +"description": "If set to true, the user event is written asynchronously after validation, and the API responds without waiting for the write.", +"location": "query", +"type": "boolean" } }, "path": "v1beta/{+parent}/userEvents:write", @@ -5327,6 +5853,11 @@ "pattern": "^projects/[^/]+/locations/[^/]+$", "required": true, "type": "string" +}, +"writeAsync": { +"description": "If set to true, the user event is written asynchronously after validation, and the API responds without waiting for the write.", +"location": "query", +"type": "boolean" } }, "path": "v1beta/{+parent}/userEvents:write", @@ -5417,7 +5948,7 @@ } } }, -"revision": "20240517", +"revision": "20240526", "rootUrl": "https://discoveryengine.googleapis.com/", "schemas": { "GoogleApiHttpBody": { @@ -5537,60 +6068,254 @@ "description": "The operation resource name of the LRO.", "type": "string" }, -"userEvent": { -"description": "The detailed content which caused the error on importing a user event.", -"type": "string" +"userEvent": { +"description": "The detailed content which caused the error on importing a user event.", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDiscoveryengineLoggingServiceContext": { +"description": "Describes a running service that sends errors.", +"id": 
"GoogleCloudDiscoveryengineLoggingServiceContext", +"properties": { +"service": { +"description": "An identifier of the service\u2014for example, `discoveryengine.googleapis.com`.", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDiscoveryengineLoggingSourceLocation": { +"description": "Indicates a location in the source code of the service for which errors are reported.", +"id": "GoogleCloudDiscoveryengineLoggingSourceLocation", +"properties": { +"functionName": { +"description": "Human-readable name of a function or method\u2014for example, `google.cloud.discoveryengine.v1alpha.RecommendationService.Recommend`.", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDiscoveryengineV1BatchCreateTargetSiteMetadata": { +"description": "Metadata related to the progress of the SiteSearchEngineService.BatchCreateTargetSites operation. This will be returned by the google.longrunning.Operation.metadata field.", +"id": "GoogleCloudDiscoveryengineV1BatchCreateTargetSiteMetadata", +"properties": { +"createTime": { +"description": "Operation create time.", +"format": "google-datetime", +"type": "string" +}, +"updateTime": { +"description": "Operation last update time. If the operation is done, this is also the finish time.", +"format": "google-datetime", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDiscoveryengineV1BatchCreateTargetSitesResponse": { +"description": "Response message for SiteSearchEngineService.BatchCreateTargetSites method.", +"id": "GoogleCloudDiscoveryengineV1BatchCreateTargetSitesResponse", +"properties": { +"targetSites": { +"description": "TargetSites created.", +"items": { +"$ref": "GoogleCloudDiscoveryengineV1TargetSite" +}, +"type": "array" +} +}, +"type": "object" +}, +"GoogleCloudDiscoveryengineV1Condition": { +"description": "Defines circumstances to be checked before allowing a behavior", +"id": "GoogleCloudDiscoveryengineV1Condition", +"properties": { +"activeTimeRange": { +"description": "Range of time(s) specifying when condition is active. Maximum of 10 time ranges.", +"items": { +"$ref": "GoogleCloudDiscoveryengineV1ConditionTimeRange" +}, +"type": "array" +}, +"queryTerms": { +"description": "Search only A list of terms to match the query on. Maximum of 10 query terms.", +"items": { +"$ref": "GoogleCloudDiscoveryengineV1ConditionQueryTerm" +}, +"type": "array" +} +}, +"type": "object" +}, +"GoogleCloudDiscoveryengineV1ConditionQueryTerm": { +"description": "Matcher for search request query", +"id": "GoogleCloudDiscoveryengineV1ConditionQueryTerm", +"properties": { +"fullMatch": { +"description": "Whether the search query needs to exactly match the query term.", +"type": "boolean" +}, +"value": { +"description": "The specific query value to match against Must be lowercase, must be UTF-8. Can have at most 3 space separated terms if full_match is true. Cannot be an empty string. Maximum length of 5000 characters.", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDiscoveryengineV1ConditionTimeRange": { +"description": "Used for time-dependent conditions.", +"id": "GoogleCloudDiscoveryengineV1ConditionTimeRange", +"properties": { +"endTime": { +"description": "End of time range. Range is inclusive. Must be in the future.", +"format": "google-datetime", +"type": "string" +}, +"startTime": { +"description": "Start of time range. 
Range is inclusive.", +"format": "google-datetime", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDiscoveryengineV1Control": { +"description": "Defines a conditioned behavior to employ during serving. Must be attached to a ServingConfig to be considered at serving time. Permitted actions dependent on `SolutionType`.", +"id": "GoogleCloudDiscoveryengineV1Control", +"properties": { +"associatedServingConfigIds": { +"description": "Output only. List of all ServingConfig ids this control is attached to. May take up to 10 minutes to update after changes.", +"items": { +"type": "string" +}, +"readOnly": true, +"type": "array" +}, +"boostAction": { +"$ref": "GoogleCloudDiscoveryengineV1ControlBoostAction", +"description": "Defines a boost-type control" +}, +"conditions": { +"description": "Determines when the associated action will trigger. Omit to always apply the action. Currently only a single condition may be specified. Otherwise an INVALID ARGUMENT error is thrown.", +"items": { +"$ref": "GoogleCloudDiscoveryengineV1Condition" +}, +"type": "array" +}, +"displayName": { +"description": "Required. Human readable name. The identifier used in UI views. Must be UTF-8 encoded string. Length limit is 128 characters. Otherwise an INVALID ARGUMENT error is thrown.", +"type": "string" +}, +"filterAction": { +"$ref": "GoogleCloudDiscoveryengineV1ControlFilterAction", +"description": "Defines a filter-type control Currently not supported by Recommendation" +}, +"name": { +"description": "Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`", +"type": "string" +}, +"redirectAction": { +"$ref": "GoogleCloudDiscoveryengineV1ControlRedirectAction", +"description": "Defines a redirect-type control." +}, +"solutionType": { +"description": "Required. Immutable. What solution the control belongs to. Must be compatible with vertical of resource. Otherwise an INVALID ARGUMENT error is thrown.", +"enum": [ +"SOLUTION_TYPE_UNSPECIFIED", +"SOLUTION_TYPE_RECOMMENDATION", +"SOLUTION_TYPE_SEARCH", +"SOLUTION_TYPE_CHAT", +"SOLUTION_TYPE_GENERATIVE_CHAT" +], +"enumDescriptions": [ +"Default value.", +"Used for Recommendations AI.", +"Used for Discovery Search.", +"Used for use cases related to the Generative AI agent.", +"Used for use cases related to the Generative Chat agent. It's used for Generative chat engine only, the associated data stores must enrolled with `SOLUTION_TYPE_CHAT` solution." +], +"type": "string" +}, +"synonymsAction": { +"$ref": "GoogleCloudDiscoveryengineV1ControlSynonymsAction", +"description": "Treats a group of terms as synonyms of one another." +}, +"useCases": { +"description": "Specifies the use case for the control. Affects what condition fields can be set. Only applies to SOLUTION_TYPE_SEARCH. Currently only allow one use case per control. Must be set when solution_type is SolutionType.SOLUTION_TYPE_SEARCH.", +"items": { +"enum": [ +"SEARCH_USE_CASE_UNSPECIFIED", +"SEARCH_USE_CASE_SEARCH", +"SEARCH_USE_CASE_BROWSE" +], +"enumDescriptions": [ +"Value used when unset. Will not occur in CSS.", +"Search use case. Expects the traffic has a non-empty query.", +"Browse use case. Expects the traffic has an empty query." 
+], +"type": "string" +}, +"type": "array" } }, "type": "object" }, -"GoogleCloudDiscoveryengineLoggingServiceContext": { -"description": "Describes a running service that sends errors.", -"id": "GoogleCloudDiscoveryengineLoggingServiceContext", +"GoogleCloudDiscoveryengineV1ControlBoostAction": { +"description": "Adjusts order of products in returned list.", +"id": "GoogleCloudDiscoveryengineV1ControlBoostAction", "properties": { -"service": { -"description": "An identifier of the service\u2014for example, `discoveryengine.googleapis.com`.", +"boost": { +"description": "Required. Strength of the boost, which should be in [-1, 1]. Negative boost means demotion. Default is 0.0 (No-op).", +"format": "float", +"type": "number" +}, +"dataStore": { +"description": "Required. Specifies which data store's documents can be boosted by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store", +"type": "string" +}, +"filter": { +"description": "Required. Specifies which products to apply the boost to. If no filter is provided all products will be boosted (No-op). Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.", "type": "string" } }, "type": "object" }, -"GoogleCloudDiscoveryengineLoggingSourceLocation": { -"description": "Indicates a location in the source code of the service for which errors are reported.", -"id": "GoogleCloudDiscoveryengineLoggingSourceLocation", +"GoogleCloudDiscoveryengineV1ControlFilterAction": { +"description": "Specified which products may be included in results. Uses same filter as boost.", +"id": "GoogleCloudDiscoveryengineV1ControlFilterAction", "properties": { -"functionName": { -"description": "Human-readable name of a function or method\u2014for example, `google.cloud.discoveryengine.v1alpha.RecommendationService.Recommend`.", +"dataStore": { +"description": "Required. Specifies which data store's documents can be filtered by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store", +"type": "string" +}, +"filter": { +"description": "Required. A filter to apply on the matching condition results. Required Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.", "type": "string" } }, "type": "object" }, -"GoogleCloudDiscoveryengineV1BatchCreateTargetSiteMetadata": { -"description": "Metadata related to the progress of the SiteSearchEngineService.BatchCreateTargetSites operation. This will be returned by the google.longrunning.Operation.metadata field.", -"id": "GoogleCloudDiscoveryengineV1BatchCreateTargetSiteMetadata", +"GoogleCloudDiscoveryengineV1ControlRedirectAction": { +"description": "Redirects a shopper to the provided URI.", +"id": "GoogleCloudDiscoveryengineV1ControlRedirectAction", "properties": { -"createTime": { -"description": "Operation create time.", -"format": "google-datetime", -"type": "string" -}, -"updateTime": { -"description": "Operation last update time. If the operation is done, this is also the finish time.", -"format": "google-datetime", +"redirectUri": { +"description": "Required. The URI to which the shopper will be redirected. Required. URI must have length equal or less than 2000 characters. 
Otherwise an INVALID ARGUMENT error is thrown.", "type": "string" } }, "type": "object" }, -"GoogleCloudDiscoveryengineV1BatchCreateTargetSitesResponse": { -"description": "Response message for SiteSearchEngineService.BatchCreateTargetSites method.", -"id": "GoogleCloudDiscoveryengineV1BatchCreateTargetSitesResponse", +"GoogleCloudDiscoveryengineV1ControlSynonymsAction": { +"description": "Creates a set of terms that will act as synonyms of one another. Example: \"happy\" will also be considered as \"glad\", \"glad\" will also be considered as \"happy\".", +"id": "GoogleCloudDiscoveryengineV1ControlSynonymsAction", "properties": { -"targetSites": { -"description": "TargetSites created.", +"synonyms": { +"description": "Defines a set of synonyms. Can specify up to 100 synonyms. Must specify at least 2 synonyms. Otherwise an INVALID ARGUMENT error is thrown.", "items": { -"$ref": "GoogleCloudDiscoveryengineV1TargetSite" +"type": "string" }, "type": "array" } @@ -6067,7 +6792,7 @@ "id": "GoogleCloudDiscoveryengineV1EngineCommonConfig", "properties": { "companyName": { -"description": "Immutable. The name of the company, business or entity that is associated with the engine. Setting this may help improve LLM related features.", +"description": "The name of the company, business or entity that is associated with the engine. Setting this may help improve LLM related features.", "type": "string" } }, @@ -6266,6 +6991,85 @@ }, "type": "object" }, +"GoogleCloudDiscoveryengineV1Project": { +"description": "Metadata and configurations for a Google Cloud project in the service.", +"id": "GoogleCloudDiscoveryengineV1Project", +"properties": { +"createTime": { +"description": "Output only. The timestamp when this project is created.", +"format": "google-datetime", +"readOnly": true, +"type": "string" +}, +"name": { +"description": "Output only. Full resource name of the project, for example `projects/{project_number}`. Note that when making requests, project number and project id are both acceptable, but the server will always respond in project number.", +"readOnly": true, +"type": "string" +}, +"provisionCompletionTime": { +"description": "Output only. The timestamp when this project is successfully provisioned. Empty value means this project is still provisioning and is not ready for use.", +"format": "google-datetime", +"readOnly": true, +"type": "string" +}, +"serviceTermsMap": { +"additionalProperties": { +"$ref": "GoogleCloudDiscoveryengineV1ProjectServiceTerms" +}, +"description": "Output only. A map of terms of services. The key is the `id` of ServiceTerms.", +"readOnly": true, +"type": "object" +} +}, +"type": "object" +}, +"GoogleCloudDiscoveryengineV1ProjectServiceTerms": { +"description": "Metadata about the terms of service.", +"id": "GoogleCloudDiscoveryengineV1ProjectServiceTerms", +"properties": { +"acceptTime": { +"description": "The last time when the project agreed to the terms of service.", +"format": "google-datetime", +"type": "string" +}, +"declineTime": { +"description": "The last time when the project declined or revoked the agreement to terms of service.", +"format": "google-datetime", +"type": "string" +}, +"id": { +"description": "The unique identifier of this terms of service. Available terms: * `GA_DATA_USE_TERMS`: [Terms for data use](https://cloud.google.com/retail/data-use-terms). 
When using this as `id`, the acceptable version to provide is `2022-11-23`.", +"type": "string" +}, +"state": { +"description": "Whether the project has accepted/rejected the service terms or it is still pending.", +"enum": [ +"STATE_UNSPECIFIED", +"TERMS_ACCEPTED", +"TERMS_PENDING", +"TERMS_DECLINED" +], +"enumDescriptions": [ +"The default value of the enum. This value is not actually used.", +"The project has given consent to the terms of service.", +"The project is pending to review and accept the terms of service.", +"The project has declined or revoked the agreement to terms of service." +], +"type": "string" +}, +"version": { +"description": "The version string of the terms of service. For acceptable values, see the comments for id above.", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDiscoveryengineV1ProvisionProjectMetadata": { +"description": "Metadata associated with a project provision operation.", +"id": "GoogleCloudDiscoveryengineV1ProvisionProjectMetadata", +"properties": {}, +"type": "object" +}, "GoogleCloudDiscoveryengineV1PurgeDocumentsMetadata": { "description": "Metadata related to the progress of the PurgeDocuments operation. This will be returned by the google.longrunning.Operation.metadata field.", "id": "GoogleCloudDiscoveryengineV1PurgeDocumentsMetadata", @@ -6776,6 +7580,14 @@ "description": "Page identifier.", "type": "string" }, +"structData": { +"additionalProperties": { +"description": "Properties of the object.", +"type": "any" +}, +"description": "The structured JSON metadata for the document. It is populated from the struct data from the Chunk in search result.", +"type": "object" +}, "title": { "description": "Title.", "type": "string" @@ -6802,6 +7614,14 @@ "description": "Document resource name.", "type": "string" }, +"structData": { +"additionalProperties": { +"description": "Properties of the object.", +"type": "any" +}, +"description": "The structured JSON metadata for the document. It is populated from the struct data from the Chunk in search result.", +"type": "object" +}, "title": { "description": "Title.", "type": "string" @@ -7004,6 +7824,200 @@ }, "type": "object" }, +"GoogleCloudDiscoveryengineV1alphaCondition": { +"description": "Defines circumstances to be checked before allowing a behavior", +"id": "GoogleCloudDiscoveryengineV1alphaCondition", +"properties": { +"activeTimeRange": { +"description": "Range of time(s) specifying when condition is active. Maximum of 10 time ranges.", +"items": { +"$ref": "GoogleCloudDiscoveryengineV1alphaConditionTimeRange" +}, +"type": "array" +}, +"queryTerms": { +"description": "Search only A list of terms to match the query on. Maximum of 10 query terms.", +"items": { +"$ref": "GoogleCloudDiscoveryengineV1alphaConditionQueryTerm" +}, +"type": "array" +} +}, +"type": "object" +}, +"GoogleCloudDiscoveryengineV1alphaConditionQueryTerm": { +"description": "Matcher for search request query", +"id": "GoogleCloudDiscoveryengineV1alphaConditionQueryTerm", +"properties": { +"fullMatch": { +"description": "Whether the search query needs to exactly match the query term.", +"type": "boolean" +}, +"value": { +"description": "The specific query value to match against Must be lowercase, must be UTF-8. Can have at most 3 space separated terms if full_match is true. Cannot be an empty string. 
Maximum length of 5000 characters.", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDiscoveryengineV1alphaConditionTimeRange": { +"description": "Used for time-dependent conditions.", +"id": "GoogleCloudDiscoveryengineV1alphaConditionTimeRange", +"properties": { +"endTime": { +"description": "End of time range. Range is inclusive. Must be in the future.", +"format": "google-datetime", +"type": "string" +}, +"startTime": { +"description": "Start of time range. Range is inclusive.", +"format": "google-datetime", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDiscoveryengineV1alphaControl": { +"description": "Defines a conditioned behavior to employ during serving. Must be attached to a ServingConfig to be considered at serving time. Permitted actions dependent on `SolutionType`.", +"id": "GoogleCloudDiscoveryengineV1alphaControl", +"properties": { +"associatedServingConfigIds": { +"description": "Output only. List of all ServingConfig ids this control is attached to. May take up to 10 minutes to update after changes.", +"items": { +"type": "string" +}, +"readOnly": true, +"type": "array" +}, +"boostAction": { +"$ref": "GoogleCloudDiscoveryengineV1alphaControlBoostAction", +"description": "Defines a boost-type control" +}, +"conditions": { +"description": "Determines when the associated action will trigger. Omit to always apply the action. Currently only a single condition may be specified. Otherwise an INVALID ARGUMENT error is thrown.", +"items": { +"$ref": "GoogleCloudDiscoveryengineV1alphaCondition" +}, +"type": "array" +}, +"displayName": { +"description": "Required. Human readable name. The identifier used in UI views. Must be UTF-8 encoded string. Length limit is 128 characters. Otherwise an INVALID ARGUMENT error is thrown.", +"type": "string" +}, +"filterAction": { +"$ref": "GoogleCloudDiscoveryengineV1alphaControlFilterAction", +"description": "Defines a filter-type control Currently not supported by Recommendation" +}, +"name": { +"description": "Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`", +"type": "string" +}, +"redirectAction": { +"$ref": "GoogleCloudDiscoveryengineV1alphaControlRedirectAction", +"description": "Defines a redirect-type control." +}, +"solutionType": { +"description": "Required. Immutable. What solution the control belongs to. Must be compatible with vertical of resource. Otherwise an INVALID ARGUMENT error is thrown.", +"enum": [ +"SOLUTION_TYPE_UNSPECIFIED", +"SOLUTION_TYPE_RECOMMENDATION", +"SOLUTION_TYPE_SEARCH", +"SOLUTION_TYPE_CHAT", +"SOLUTION_TYPE_GENERATIVE_CHAT" +], +"enumDescriptions": [ +"Default value.", +"Used for Recommendations AI.", +"Used for Discovery Search.", +"Used for use cases related to the Generative AI agent.", +"Used for use cases related to the Generative Chat agent. It's used for Generative chat engine only, the associated data stores must enrolled with `SOLUTION_TYPE_CHAT` solution." +], +"type": "string" +}, +"synonymsAction": { +"$ref": "GoogleCloudDiscoveryengineV1alphaControlSynonymsAction", +"description": "Treats a group of terms as synonyms of one another." +}, +"useCases": { +"description": "Specifies the use case for the control. Affects what condition fields can be set. Only applies to SOLUTION_TYPE_SEARCH. Currently only allow one use case per control. 
Must be set when solution_type is SolutionType.SOLUTION_TYPE_SEARCH.", +"items": { +"enum": [ +"SEARCH_USE_CASE_UNSPECIFIED", +"SEARCH_USE_CASE_SEARCH", +"SEARCH_USE_CASE_BROWSE" +], +"enumDescriptions": [ +"Value used when unset. Will not occur in CSS.", +"Search use case. Expects the traffic has a non-empty query.", +"Browse use case. Expects the traffic has an empty query." +], +"type": "string" +}, +"type": "array" +} +}, +"type": "object" +}, +"GoogleCloudDiscoveryengineV1alphaControlBoostAction": { +"description": "Adjusts order of products in returned list.", +"id": "GoogleCloudDiscoveryengineV1alphaControlBoostAction", +"properties": { +"boost": { +"description": "Required. Strength of the boost, which should be in [-1, 1]. Negative boost means demotion. Default is 0.0 (No-op).", +"format": "float", +"type": "number" +}, +"dataStore": { +"description": "Required. Specifies which data store's documents can be boosted by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store", +"type": "string" +}, +"filter": { +"description": "Required. Specifies which products to apply the boost to. If no filter is provided all products will be boosted (No-op). Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDiscoveryengineV1alphaControlFilterAction": { +"description": "Specified which products may be included in results. Uses same filter as boost.", +"id": "GoogleCloudDiscoveryengineV1alphaControlFilterAction", +"properties": { +"dataStore": { +"description": "Required. Specifies which data store's documents can be filtered by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store", +"type": "string" +}, +"filter": { +"description": "Required. A filter to apply on the matching condition results. Required Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDiscoveryengineV1alphaControlRedirectAction": { +"description": "Redirects a shopper to the provided URI.", +"id": "GoogleCloudDiscoveryengineV1alphaControlRedirectAction", +"properties": { +"redirectUri": { +"description": "Required. The URI to which the shopper will be redirected. Required. URI must have length equal or less than 2000 characters. Otherwise an INVALID ARGUMENT error is thrown.", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDiscoveryengineV1alphaControlSynonymsAction": { +"description": "Creates a set of terms that will act as synonyms of one another. Example: \"happy\" will also be considered as \"glad\", \"glad\" will also be considered as \"happy\".", +"id": "GoogleCloudDiscoveryengineV1alphaControlSynonymsAction", +"properties": { +"synonyms": { +"description": "Defines a set of synonyms. Can specify up to 100 synonyms. Must specify at least 2 synonyms. Otherwise an INVALID ARGUMENT error is thrown.", +"items": { +"type": "string" +}, +"type": "array" +} +}, +"type": "object" +}, "GoogleCloudDiscoveryengineV1alphaCreateDataStoreMetadata": { "description": "Metadata related to the progress of the DataStoreService.CreateDataStore operation. 
This will be returned by the google.longrunning.Operation.metadata field.", "id": "GoogleCloudDiscoveryengineV1alphaCreateDataStoreMetadata", @@ -7096,7 +8110,7 @@ "TRAINING_FAILED" ], "enumDescriptions": [ -"", +"Default value.", "The model is in a paused training state.", "The model is currently training.", "The model has successfully completed training.", @@ -7106,6 +8120,7 @@ "type": "string" }, "modelVersion": { +"description": "The version of the model.", "format": "int64", "type": "string" }, @@ -7586,7 +8601,7 @@ "id": "GoogleCloudDiscoveryengineV1alphaEngineCommonConfig", "properties": { "companyName": { -"description": "Immutable. The name of the company, business or entity that is associated with the engine. Setting this may help improve LLM related features.", +"description": "The name of the company, business or entity that is associated with the engine. Setting this may help improve LLM related features.", "type": "string" } }, @@ -7772,12 +8787,14 @@ "enum": [ "ADVANCED_SITE_SEARCH_DATA_SOURCE_UNSPECIFIED", "METATAGS", -"PAGEMAP" +"PAGEMAP", +"SCHEMA_ORG" ], "enumDescriptions": [ "Value used when unset.", "Retrieve value from meta tag.", -"Retrieve value from page map." +"Retrieve value from page map.", +"Retrieve value from schema.org data." ], "type": "string" }, @@ -7887,6 +8904,13 @@ ], "type": "string" }, +"schemaOrgPaths": { +"description": "Field paths for indexing custom attribute from schema.org data. More details of schema.org and its defined types can be found at [schema.org](https://schema.org). It is only used on advanced site search schema. Currently only support full path from root. The full path to a field is constructed by concatenating field names, starting from `_root`, with a period `.` as the delimiter. Examples: * Publish date of the root: _root.datePublished * Publish date of the reviews: _root.review.datePublished", +"items": { +"type": "string" +}, +"type": "array" +}, "searchableOption": { "description": "If searchable_option is SEARCHABLE_ENABLED, field values are searchable by text queries in SearchService.Search. If SEARCHABLE_ENABLED but field type is numerical, field values will not be searchable by text queries in SearchService.Search, as there are no text values associated to numerical fields. If searchable_option is unset, the server behavior defaults to SEARCHABLE_DISABLED for fields that support setting searchable options. Only `string` fields that have no key property mapping support setting searchable_option. For those fields that do not support setting searchable options, the server will skip searchable option setting, and setting searchable_option for those fields will throw `INVALID_ARGUMENT` error.", "enum": [ @@ -8949,6 +9973,10 @@ "description": "Specifies whether to filter out adversarial queries. The default value is `false`. Google employs search-query classification to detect adversarial queries. No answer is returned if the search query is classified as an adversarial query. For example, a user might ask a question regarding negative comments about the company or submit a query designed to generate unsafe, policy-violating output. If this field is set to `true`, we skip generating answers for adversarial queries and return fallback messages instead.", "type": "boolean" }, +"ignoreLowRelevantContent": { +"description": "Specifies whether to filter out queries that have low relevance. If this field is set to `false`, all search results are used regardless of relevance to generate answers. 
If set to `true` or unset, the behavior will be determined automatically by the service.", +"type": "boolean" +}, "ignoreNonAnswerSeekingQuery": { "description": "Specifies whether to filter out queries that are not answer-seeking. The default value is `false`. Google employs search-query classification to detect answer-seeking queries. No answer is returned if the search query is classified as a non-answer seeking query. If this field is set to `true`, we skip generating answers for non-answer seeking queries and return fallback messages instead.", "type": "boolean" @@ -9090,6 +10118,13 @@ "$ref": "GoogleCloudDiscoveryengineV1betaSearchRequestBoostSpec", "description": "Boost specification to boost certain documents in search results which may affect the answer query response. For more information on boosting, see [Boosting](https://cloud.google.com/retail/docs/boosting#boost)" }, +"dataStoreSpecs": { +"description": "Specs defining dataStores to filter on in a search call and configurations for those dataStores. This is only considered for engines with multiple dataStores use case. For single dataStore within an engine, they should use the specs at the top level.", +"items": { +"$ref": "GoogleCloudDiscoveryengineV1betaSearchRequestDataStoreSpec" +}, +"type": "array" +}, "filter": { "description": "The filter syntax consists of an expression language for constructing a predicate from one or more fields of the documents being filtered. Filter expression is case-sensitive. This will be used to filter search results which may affect the Answer response. If this field is unrecognizable, an `INVALID_ARGUMENT` is returned. Filtering in Vertex AI Search is done by mapping the LHS filter key to a key property defined in the Vertex AI Search backend -- this mapping is defined by the customer in their schema. For example a media customers might have a field 'name' in their schema. In this case the filter would look like this: filter --> name:'ANY(\"king kong\")' For more information about filtering including syntax and filter operators, see [Filter](https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata)", "type": "string" @@ -9344,6 +10379,14 @@ "description": "Page identifier.", "type": "string" }, +"structData": { +"additionalProperties": { +"description": "Properties of the object.", +"type": "any" +}, +"description": "The structured JSON metadata for the document. It is populated from the struct data from the Chunk in search result.", +"type": "object" +}, "title": { "description": "Title.", "type": "string" @@ -9370,6 +10413,14 @@ "description": "Document resource name.", "type": "string" }, +"structData": { +"additionalProperties": { +"description": "Properties of the object.", +"type": "any" +}, +"description": "The structured JSON metadata for the document. It is populated from the struct data from the Chunk in search result.", +"type": "object" +}, "title": { "description": "Title.", "type": "string" @@ -9957,6 +11008,200 @@ }, "type": "object" }, +"GoogleCloudDiscoveryengineV1betaCondition": { +"description": "Defines circumstances to be checked before allowing a behavior", +"id": "GoogleCloudDiscoveryengineV1betaCondition", +"properties": { +"activeTimeRange": { +"description": "Range of time(s) specifying when condition is active. Maximum of 10 time ranges.", +"items": { +"$ref": "GoogleCloudDiscoveryengineV1betaConditionTimeRange" +}, +"type": "array" +}, +"queryTerms": { +"description": "Search only A list of terms to match the query on. 
Maximum of 10 query terms.", +"items": { +"$ref": "GoogleCloudDiscoveryengineV1betaConditionQueryTerm" +}, +"type": "array" +} +}, +"type": "object" +}, +"GoogleCloudDiscoveryengineV1betaConditionQueryTerm": { +"description": "Matcher for search request query", +"id": "GoogleCloudDiscoveryengineV1betaConditionQueryTerm", +"properties": { +"fullMatch": { +"description": "Whether the search query needs to exactly match the query term.", +"type": "boolean" +}, +"value": { +"description": "The specific query value to match against Must be lowercase, must be UTF-8. Can have at most 3 space separated terms if full_match is true. Cannot be an empty string. Maximum length of 5000 characters.", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDiscoveryengineV1betaConditionTimeRange": { +"description": "Used for time-dependent conditions.", +"id": "GoogleCloudDiscoveryengineV1betaConditionTimeRange", +"properties": { +"endTime": { +"description": "End of time range. Range is inclusive. Must be in the future.", +"format": "google-datetime", +"type": "string" +}, +"startTime": { +"description": "Start of time range. Range is inclusive.", +"format": "google-datetime", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDiscoveryengineV1betaControl": { +"description": "Defines a conditioned behavior to employ during serving. Must be attached to a ServingConfig to be considered at serving time. Permitted actions dependent on `SolutionType`.", +"id": "GoogleCloudDiscoveryengineV1betaControl", +"properties": { +"associatedServingConfigIds": { +"description": "Output only. List of all ServingConfig ids this control is attached to. May take up to 10 minutes to update after changes.", +"items": { +"type": "string" +}, +"readOnly": true, +"type": "array" +}, +"boostAction": { +"$ref": "GoogleCloudDiscoveryengineV1betaControlBoostAction", +"description": "Defines a boost-type control" +}, +"conditions": { +"description": "Determines when the associated action will trigger. Omit to always apply the action. Currently only a single condition may be specified. Otherwise an INVALID ARGUMENT error is thrown.", +"items": { +"$ref": "GoogleCloudDiscoveryengineV1betaCondition" +}, +"type": "array" +}, +"displayName": { +"description": "Required. Human readable name. The identifier used in UI views. Must be UTF-8 encoded string. Length limit is 128 characters. Otherwise an INVALID ARGUMENT error is thrown.", +"type": "string" +}, +"filterAction": { +"$ref": "GoogleCloudDiscoveryengineV1betaControlFilterAction", +"description": "Defines a filter-type control Currently not supported by Recommendation" +}, +"name": { +"description": "Immutable. Fully qualified name `projects/*/locations/global/dataStore/*/controls/*`", +"type": "string" +}, +"redirectAction": { +"$ref": "GoogleCloudDiscoveryengineV1betaControlRedirectAction", +"description": "Defines a redirect-type control." +}, +"solutionType": { +"description": "Required. Immutable. What solution the control belongs to. Must be compatible with vertical of resource. Otherwise an INVALID ARGUMENT error is thrown.", +"enum": [ +"SOLUTION_TYPE_UNSPECIFIED", +"SOLUTION_TYPE_RECOMMENDATION", +"SOLUTION_TYPE_SEARCH", +"SOLUTION_TYPE_CHAT", +"SOLUTION_TYPE_GENERATIVE_CHAT" +], +"enumDescriptions": [ +"Default value.", +"Used for Recommendations AI.", +"Used for Discovery Search.", +"Used for use cases related to the Generative AI agent.", +"Used for use cases related to the Generative Chat agent. 
It's used for Generative chat engine only, the associated data stores must enrolled with `SOLUTION_TYPE_CHAT` solution." +], +"type": "string" +}, +"synonymsAction": { +"$ref": "GoogleCloudDiscoveryengineV1betaControlSynonymsAction", +"description": "Treats a group of terms as synonyms of one another." +}, +"useCases": { +"description": "Specifies the use case for the control. Affects what condition fields can be set. Only applies to SOLUTION_TYPE_SEARCH. Currently only allow one use case per control. Must be set when solution_type is SolutionType.SOLUTION_TYPE_SEARCH.", +"items": { +"enum": [ +"SEARCH_USE_CASE_UNSPECIFIED", +"SEARCH_USE_CASE_SEARCH", +"SEARCH_USE_CASE_BROWSE" +], +"enumDescriptions": [ +"Value used when unset. Will not occur in CSS.", +"Search use case. Expects the traffic has a non-empty query.", +"Browse use case. Expects the traffic has an empty query." +], +"type": "string" +}, +"type": "array" +} +}, +"type": "object" +}, +"GoogleCloudDiscoveryengineV1betaControlBoostAction": { +"description": "Adjusts order of products in returned list.", +"id": "GoogleCloudDiscoveryengineV1betaControlBoostAction", +"properties": { +"boost": { +"description": "Required. Strength of the boost, which should be in [-1, 1]. Negative boost means demotion. Default is 0.0 (No-op).", +"format": "float", +"type": "number" +}, +"dataStore": { +"description": "Required. Specifies which data store's documents can be boosted by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store", +"type": "string" +}, +"filter": { +"description": "Required. Specifies which products to apply the boost to. If no filter is provided all products will be boosted (No-op). Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDiscoveryengineV1betaControlFilterAction": { +"description": "Specified which products may be included in results. Uses same filter as boost.", +"id": "GoogleCloudDiscoveryengineV1betaControlFilterAction", +"properties": { +"dataStore": { +"description": "Required. Specifies which data store's documents can be filtered by this control. Full data store name e.g. projects/123/locations/global/collections/default_collection/dataStores/default_data_store", +"type": "string" +}, +"filter": { +"description": "Required. A filter to apply on the matching condition results. Required Syntax documentation: https://cloud.google.com/retail/docs/filter-and-order Maximum length is 5000 characters. Otherwise an INVALID ARGUMENT error is thrown.", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDiscoveryengineV1betaControlRedirectAction": { +"description": "Redirects a shopper to the provided URI.", +"id": "GoogleCloudDiscoveryengineV1betaControlRedirectAction", +"properties": { +"redirectUri": { +"description": "Required. The URI to which the shopper will be redirected. Required. URI must have length equal or less than 2000 characters. Otherwise an INVALID ARGUMENT error is thrown.", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDiscoveryengineV1betaControlSynonymsAction": { +"description": "Creates a set of terms that will act as synonyms of one another. 
Example: \"happy\" will also be considered as \"glad\", \"glad\" will also be considered as \"happy\".", +"id": "GoogleCloudDiscoveryengineV1betaControlSynonymsAction", +"properties": { +"synonyms": { +"description": "Defines a set of synonyms. Can specify up to 100 synonyms. Must specify at least 2 synonyms. Otherwise an INVALID ARGUMENT error is thrown.", +"items": { +"type": "string" +}, +"type": "array" +} +}, +"type": "object" +}, "GoogleCloudDiscoveryengineV1betaConversation": { "description": "External conversation proto definition.", "id": "GoogleCloudDiscoveryengineV1betaConversation", @@ -10244,7 +11489,7 @@ "TRAINING_FAILED" ], "enumDescriptions": [ -"", +"Default value.", "The model is in a paused training state.", "The model is currently training.", "The model has successfully completed training.", @@ -10254,6 +11499,7 @@ "type": "string" }, "modelVersion": { +"description": "The version of the model.", "format": "int64", "type": "string" }, @@ -10814,7 +12060,7 @@ "id": "GoogleCloudDiscoveryengineV1betaEngineCommonConfig", "properties": { "companyName": { -"description": "Immutable. The name of the company, business or entity that is associated with the engine. Setting this may help improve LLM related features.", +"description": "The name of the company, business or entity that is associated with the engine. Setting this may help improve LLM related features.", "type": "string" } }, @@ -11314,6 +12560,24 @@ }, "type": "object" }, +"GoogleCloudDiscoveryengineV1betaListControlsResponse": { +"description": "Response for ListControls method.", +"id": "GoogleCloudDiscoveryengineV1betaListControlsResponse", +"properties": { +"controls": { +"description": "All the Controls for a given data store.", +"items": { +"$ref": "GoogleCloudDiscoveryengineV1betaControl" +}, +"type": "array" +}, +"nextPageToken": { +"description": "Pagination token, if not returned indicates the last page.", +"type": "string" +} +}, +"type": "object" +}, "GoogleCloudDiscoveryengineV1betaListConversationsResponse": { "description": "Response for ListConversations method.", "id": "GoogleCloudDiscoveryengineV1betaListConversationsResponse", @@ -11548,6 +12812,100 @@ "properties": {}, "type": "object" }, +"GoogleCloudDiscoveryengineV1betaProject": { +"description": "Metadata and configurations for a Google Cloud project in the service.", +"id": "GoogleCloudDiscoveryengineV1betaProject", +"properties": { +"createTime": { +"description": "Output only. The timestamp when this project is created.", +"format": "google-datetime", +"readOnly": true, +"type": "string" +}, +"name": { +"description": "Output only. Full resource name of the project, for example `projects/{project_number}`. Note that when making requests, project number and project id are both acceptable, but the server will always respond in project number.", +"readOnly": true, +"type": "string" +}, +"provisionCompletionTime": { +"description": "Output only. The timestamp when this project is successfully provisioned. Empty value means this project is still provisioning and is not ready for use.", +"format": "google-datetime", +"readOnly": true, +"type": "string" +}, +"serviceTermsMap": { +"additionalProperties": { +"$ref": "GoogleCloudDiscoveryengineV1betaProjectServiceTerms" +}, +"description": "Output only. A map of terms of services. 
The key is the `id` of ServiceTerms.", +"readOnly": true, +"type": "object" +} +}, +"type": "object" +}, +"GoogleCloudDiscoveryengineV1betaProjectServiceTerms": { +"description": "Metadata about the terms of service.", +"id": "GoogleCloudDiscoveryengineV1betaProjectServiceTerms", +"properties": { +"acceptTime": { +"description": "The last time when the project agreed to the terms of service.", +"format": "google-datetime", +"type": "string" +}, +"declineTime": { +"description": "The last time when the project declined or revoked the agreement to terms of service.", +"format": "google-datetime", +"type": "string" +}, +"id": { +"description": "The unique identifier of this terms of service. Available terms: * `GA_DATA_USE_TERMS`: [Terms for data use](https://cloud.google.com/retail/data-use-terms). When using this as `id`, the acceptable version to provide is `2022-11-23`.", +"type": "string" +}, +"state": { +"description": "Whether the project has accepted/rejected the service terms or it is still pending.", +"enum": [ +"STATE_UNSPECIFIED", +"TERMS_ACCEPTED", +"TERMS_PENDING", +"TERMS_DECLINED" +], +"enumDescriptions": [ +"The default value of the enum. This value is not actually used.", +"The project has given consent to the terms of service.", +"The project is pending to review and accept the terms of service.", +"The project has declined or revoked the agreement to terms of service." +], +"type": "string" +}, +"version": { +"description": "The version string of the terms of service. For acceptable values, see the comments for id above.", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDiscoveryengineV1betaProvisionProjectMetadata": { +"description": "Metadata associated with a project provision operation.", +"id": "GoogleCloudDiscoveryengineV1betaProvisionProjectMetadata", +"properties": {}, +"type": "object" +}, +"GoogleCloudDiscoveryengineV1betaProvisionProjectRequest": { +"description": "Request for ProjectService.ProvisionProject method.", +"id": "GoogleCloudDiscoveryengineV1betaProvisionProjectRequest", +"properties": { +"acceptDataUseTerms": { +"description": "Required. Set to `true` to specify that caller has read and would like to give consent to the [Terms for data use](https://cloud.google.com/retail/data-use-terms).", +"type": "boolean" +}, +"dataUseTermsVersion": { +"description": "Required. The version of the [Terms for data use](https://cloud.google.com/retail/data-use-terms) that caller has read and would like to give consent to. Acceptable version is `2022-11-23`, and this may change over time.", +"type": "string" +} +}, +"type": "object" +}, "GoogleCloudDiscoveryengineV1betaPurgeDocumentsMetadata": { "description": "Metadata related to the progress of the PurgeDocuments operation. This will be returned by the google.longrunning.Operation.metadata field.", "id": "GoogleCloudDiscoveryengineV1betaPurgeDocumentsMetadata", @@ -11698,6 +13056,13 @@ "description": "The number of results to return. If this is unset or no bigger than zero, returns all results.", "format": "int32", "type": "integer" +}, +"userLabels": { +"additionalProperties": { +"type": "string" +}, +"description": "The user labels applied to a resource must meet the following requirements: * Each resource can have multiple labels, up to a maximum of 64. * Each label must be a key-value pair. * Keys have a minimum length of 1 character and a maximum length of 63 characters and cannot be empty. Values can be empty and have a maximum length of 63 characters. 
* Keys and values can contain only lowercase letters, numeric characters, underscores, and dashes. All characters must use UTF-8 encoding, and international characters are allowed. * The key portion of a label must be unique. However, you can use the same key with multiple resources. * Keys must start with a lowercase letter or international character. See [Google Cloud Document](https://cloud.google.com/resource-manager/docs/creating-managing-labels#requirements) for more details.", +"type": "object" } }, "type": "object" @@ -13468,6 +14833,10 @@ "$ref": "GoogleCloudDiscoveryengineV1betaCompletionInfo", "description": "CompletionService.CompleteQuery details related to the event. This field should be set for `search` event when autocomplete function is enabled and the user clicks a suggestion for search." }, +"dataStore": { +"description": "The DataStore resource full name, of the form `projects/{project}/locations/{location}/collections/{collection_id}/dataStores/{data_store_id}`. Optional. Only required for user events whose data store can't by determined by UserEvent.engine or UserEvent.documents. If data store is set in the parent of write/import/collect user event requests, this field can be omitted.", +"type": "string" +}, "directUserRequest": { "description": "Should set to true if the request is made directly from the end user, in which case the UserEvent.user_info.user_agent can be populated from the HTTP request. This flag should be set only if the API request is made directly from the end user such as a mobile app (and not if a gateway or a server is processing and pushing the user events). This should not be set when using the JavaScript tag in UserEventService.CollectUserEvent.", "type": "boolean" @@ -13479,6 +14848,10 @@ }, "type": "array" }, +"engine": { +"description": "The Engine resource name, in the form of `projects/{project}/locations/{location}/collections/{collection_id}/engines/{engine_id}`. Optional. Only required for Engine produced user events. For example, user events from blended search.", +"type": "string" +}, "eventTime": { "description": "Only required for UserEventService.ImportUserEvents method. Timestamp of when the user event happened.", "format": "google-datetime", diff --git a/googleapiclient/discovery_cache/documents/displayvideo.v2.json b/googleapiclient/discovery_cache/documents/displayvideo.v2.json index a0226823381..1fae481b419 100644 --- a/googleapiclient/discovery_cache/documents/displayvideo.v2.json +++ b/googleapiclient/discovery_cache/documents/displayvideo.v2.json @@ -9267,7 +9267,7 @@ } } }, -"revision": "20240514", +"revision": "20240530", "rootUrl": "https://displayvideo.googleapis.com/", "schemas": { "ActivateManualTriggerRequest": { @@ -12331,7 +12331,8 @@ "SDF_VERSION_5_4", "SDF_VERSION_5_5", "SDF_VERSION_6", -"SDF_VERSION_7" +"SDF_VERSION_7", +"SDF_VERSION_7_1" ], "enumDeprecated": [ false, @@ -12346,6 +12347,7 @@ true, true, false, false, +false, false ], "enumDescriptions": [ @@ -12361,7 +12363,8 @@ false "SDF version 5.4", "SDF version 5.5", "SDF version 6", -"SDF version 7. Read the [v7 migration guide](/display-video/api/structured-data-file/v7-migration-guide) before migrating to this version." +"SDF version 7. Read the [v7 migration guide](/display-video/api/structured-data-file/v7-migration-guide) before migrating to this version.", +"SDF version 7.1. Read the [v7 migration guide](/display-video/api/structured-data-file/v7-migration-guide) before migrating to this version." 
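The new Control schemas above (Condition, BoostAction, FilterAction, RedirectAction, SynonymsAction, in both v1alpha and v1beta flavors) are easiest to read alongside a concrete call. A minimal sketch with this package, assuming the controls collection is exposed under projects.locations.dataStores as the ListControlsResponse added below suggests, and assuming create() takes a controlId parameter; every identifier in the sketch is illustrative:

from googleapiclient.discovery import build

# All names below are illustrative; credential wiring is omitted.
parent = ("projects/123/locations/global/collections/default_collection"
          "/dataStores/default_data_store")
service = build("discoveryengine", "v1beta")

control = {
    "displayName": "Boost clearance items",   # Required, at most 128 characters.
    "solutionType": "SOLUTION_TYPE_SEARCH",   # Required and immutable.
    "useCases": ["SEARCH_USE_CASE_SEARCH"],   # Required for SOLUTION_TYPE_SEARCH.
    "conditions": [{                          # At most one condition, per the schema.
        "queryTerms": [{"value": "sale", "fullMatch": False}],
    }],
    "boostAction": {
        "boost": 0.5,                          # Must lie in [-1, 1]; negative demotes.
        "dataStore": parent,
        "filter": 'category: ANY("clearance")',  # Illustrative filter string.
    },
}

response = (
    service.projects().locations().dataStores().controls()
    .create(parent=parent, controlId="boost-clearance", body=control)
    .execute()
)
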
], "type": "string" } @@ -19757,7 +19760,7 @@ false "type": "array" }, "publisherReviewStatuses": { -"description": "Publisher review statuses for the creative.", +"description": "Publisher review statuses for the creative. **Warning:** This field will be deprecated on June 26th, 2024. After this date, this field will be empty. Read our [feature deprecation announcement](/display-video/api/deprecations#features.creative_publisher_review_statuses) for more information.", "items": { "$ref": "PublisherReviewStatus" }, @@ -19826,7 +19829,8 @@ false "SDF_VERSION_5_4", "SDF_VERSION_5_5", "SDF_VERSION_6", -"SDF_VERSION_7" +"SDF_VERSION_7", +"SDF_VERSION_7_1" ], "enumDeprecated": [ false, @@ -19841,6 +19845,7 @@ true, true, false, false, +false, false ], "enumDescriptions": [ @@ -19856,7 +19861,8 @@ false "SDF version 5.4", "SDF version 5.5", "SDF version 6", -"SDF version 7. Read the [v7 migration guide](/display-video/api/structured-data-file/v7-migration-guide) before migrating to this version." +"SDF version 7. Read the [v7 migration guide](/display-video/api/structured-data-file/v7-migration-guide) before migrating to this version.", +"SDF version 7.1. Read the [v7 migration guide](/display-video/api/structured-data-file/v7-migration-guide) before migrating to this version." ], "type": "string" } @@ -19903,7 +19909,8 @@ false "SDF_VERSION_5_4", "SDF_VERSION_5_5", "SDF_VERSION_6", -"SDF_VERSION_7" +"SDF_VERSION_7", +"SDF_VERSION_7_1" ], "enumDeprecated": [ false, @@ -19918,6 +19925,7 @@ true, true, false, false, +false, false ], "enumDescriptions": [ @@ -19933,7 +19941,8 @@ false "SDF version 5.4", "SDF version 5.5", "SDF version 6", -"SDF version 7. Read the [v7 migration guide](/display-video/api/structured-data-file/v7-migration-guide) before migrating to this version." +"SDF version 7. Read the [v7 migration guide](/display-video/api/structured-data-file/v7-migration-guide) before migrating to this version.", +"SDF version 7.1. Read the [v7 migration guide](/display-video/api/structured-data-file/v7-migration-guide) before migrating to this version." ], "type": "string" } diff --git a/googleapiclient/discovery_cache/documents/displayvideo.v3.json b/googleapiclient/discovery_cache/documents/displayvideo.v3.json index 720d636b191..a934d154a32 100644 --- a/googleapiclient/discovery_cache/documents/displayvideo.v3.json +++ b/googleapiclient/discovery_cache/documents/displayvideo.v3.json @@ -9222,7 +9222,7 @@ } } }, -"revision": "20240514", +"revision": "20240530", "rootUrl": "https://displayvideo.googleapis.com/", "schemas": { "ActiveViewVideoViewabilityMetricConfig": { @@ -12904,7 +12904,8 @@ "SDF_VERSION_5_4", "SDF_VERSION_5_5", "SDF_VERSION_6", -"SDF_VERSION_7" +"SDF_VERSION_7", +"SDF_VERSION_7_1" ], "enumDeprecated": [ false, @@ -12919,6 +12920,7 @@ true, true, false, false, +false, false ], "enumDescriptions": [ @@ -12934,7 +12936,8 @@ false "SDF version 5.4", "SDF version 5.5", "SDF version 6", -"SDF version 7. Read the [v7 migration guide](/display-video/api/structured-data-file/v7-migration-guide) before migrating to this version." +"SDF version 7. Read the [v7 migration guide](/display-video/api/structured-data-file/v7-migration-guide) before migrating to this version.", +"SDF version 7.1. Read the [v7 migration guide](/display-video/api/structured-data-file/v7-migration-guide) before migrating to this version." 
], "type": "string" } @@ -20486,7 +20489,7 @@ false "type": "array" }, "publisherReviewStatuses": { -"description": "Publisher review statuses for the creative.", +"description": "Publisher review statuses for the creative. **Warning:** This field will be deprecated on June 26th, 2024. After this date, this field will be empty. Read our [feature deprecation announcement](/display-video/api/deprecations#features.creative_publisher_review_statuses) for more information.", "items": { "$ref": "PublisherReviewStatus" }, @@ -20555,7 +20558,8 @@ false "SDF_VERSION_5_4", "SDF_VERSION_5_5", "SDF_VERSION_6", -"SDF_VERSION_7" +"SDF_VERSION_7", +"SDF_VERSION_7_1" ], "enumDeprecated": [ false, @@ -20570,6 +20574,7 @@ true, true, false, false, +false, false ], "enumDescriptions": [ @@ -20585,7 +20590,8 @@ false "SDF version 5.4", "SDF version 5.5", "SDF version 6", -"SDF version 7. Read the [v7 migration guide](/display-video/api/structured-data-file/v7-migration-guide) before migrating to this version." +"SDF version 7. Read the [v7 migration guide](/display-video/api/structured-data-file/v7-migration-guide) before migrating to this version.", +"SDF version 7.1. Read the [v7 migration guide](/display-video/api/structured-data-file/v7-migration-guide) before migrating to this version." ], "type": "string" } @@ -20632,7 +20638,8 @@ false "SDF_VERSION_5_4", "SDF_VERSION_5_5", "SDF_VERSION_6", -"SDF_VERSION_7" +"SDF_VERSION_7", +"SDF_VERSION_7_1" ], "enumDeprecated": [ false, @@ -20647,6 +20654,7 @@ true, true, false, false, +false, false ], "enumDescriptions": [ @@ -20662,7 +20670,8 @@ false "SDF version 5.4", "SDF version 5.5", "SDF version 6", -"SDF version 7. Read the [v7 migration guide](/display-video/api/structured-data-file/v7-migration-guide) before migrating to this version." +"SDF version 7. Read the [v7 migration guide](/display-video/api/structured-data-file/v7-migration-guide) before migrating to this version.", +"SDF version 7.1. Read the [v7 migration guide](/display-video/api/structured-data-file/v7-migration-guide) before migrating to this version." 
], "type": "string" } diff --git a/googleapiclient/discovery_cache/documents/dlp.v2.json b/googleapiclient/discovery_cache/documents/dlp.v2.json index bf4ff9da262..54bc8743e43 100644 --- a/googleapiclient/discovery_cache/documents/dlp.v2.json +++ b/googleapiclient/discovery_cache/documents/dlp.v2.json @@ -4451,7 +4451,7 @@ } } }, -"revision": "20240519", +"revision": "20240526", "rootUrl": "https://dlp.googleapis.com/", "schemas": { "GooglePrivacyDlpV2Action": { @@ -7578,6 +7578,7 @@ "GLOBAL", "ARGENTINA", "AUSTRALIA", +"AZERBAIJAN", "BELGIUM", "BRAZIL", "CANADA", @@ -7628,6 +7629,7 @@ "The infoType is not issued by or tied to a specific region, but is used almost everywhere.", "The infoType is typically used in Argentina.", "The infoType is typically used in Australia.", +"The infoType is typically used in Azerbaijan.", "The infoType is typically used in Belgium.", "The infoType is typically used in Brazil.", "The infoType is typically used in Canada.", diff --git a/googleapiclient/discovery_cache/documents/dns.v1.json b/googleapiclient/discovery_cache/documents/dns.v1.json index 16429c1b51e..f321c425602 100644 --- a/googleapiclient/discovery_cache/documents/dns.v1.json +++ b/googleapiclient/discovery_cache/documents/dns.v1.json @@ -1824,7 +1824,7 @@ } } }, -"revision": "20240521", +"revision": "20240524", "rootUrl": "https://dns.googleapis.com/", "schemas": { "Change": { diff --git a/googleapiclient/discovery_cache/documents/dns.v1beta2.json b/googleapiclient/discovery_cache/documents/dns.v1beta2.json index 0ea62992084..044828410fc 100644 --- a/googleapiclient/discovery_cache/documents/dns.v1beta2.json +++ b/googleapiclient/discovery_cache/documents/dns.v1beta2.json @@ -1821,7 +1821,7 @@ } } }, -"revision": "20240521", +"revision": "20240524", "rootUrl": "https://dns.googleapis.com/", "schemas": { "Change": { diff --git a/googleapiclient/discovery_cache/documents/docs.v1.json b/googleapiclient/discovery_cache/documents/docs.v1.json index f8d0490089c..3da4ec778b6 100644 --- a/googleapiclient/discovery_cache/documents/docs.v1.json +++ b/googleapiclient/discovery_cache/documents/docs.v1.json @@ -216,7 +216,7 @@ } } }, -"revision": "20240514", +"revision": "20240603", "rootUrl": "https://docs.googleapis.com/", "schemas": { "AutoText": { diff --git a/googleapiclient/discovery_cache/documents/documentai.v1.json b/googleapiclient/discovery_cache/documents/documentai.v1.json index 74edffc822e..3ec97eb84a1 100644 --- a/googleapiclient/discovery_cache/documents/documentai.v1.json +++ b/googleapiclient/discovery_cache/documents/documentai.v1.json @@ -1042,7 +1042,7 @@ } } }, -"revision": "20240523", +"revision": "20240531", "rootUrl": "https://documentai.googleapis.com/", "schemas": { "GoogleCloudDocumentaiUiv1beta3AutoLabelDocumentsMetadata": { @@ -2217,11 +2217,19 @@ "description": "Document represents the canonical document resource in Document AI. It is an interchange format that provides insights into documents and allows for collaboration between users and Document AI to iterate and optimize for quality.", "id": "GoogleCloudDocumentaiV1Document", "properties": { +"chunkedDocument": { +"$ref": "GoogleCloudDocumentaiV1DocumentChunkedDocument", +"description": "Document chunked based on chunking config." +}, "content": { "description": "Optional. Inline document content, represented as a stream of bytes. 
Note: As with all `bytes` fields, protobuffers use a pure binary representation, whereas JSON representations use base64.", "format": "byte", "type": "string" }, +"documentLayout": { +"$ref": "GoogleCloudDocumentaiV1DocumentDocumentLayout", +"description": "Parsed layout of the document." +}, "entities": { "description": "A list of entities detected on Document.text. For document shards, entities in this list may cross shard boundaries.", "items": { @@ -2288,6 +2296,282 @@ }, "type": "object" }, +"GoogleCloudDocumentaiV1DocumentChunkedDocument": { +"description": "Represents the chunks that the document is divided into.", +"id": "GoogleCloudDocumentaiV1DocumentChunkedDocument", +"properties": { +"chunks": { +"description": "List of chunks.", +"items": { +"$ref": "GoogleCloudDocumentaiV1DocumentChunkedDocumentChunk" +}, +"type": "array" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1DocumentChunkedDocumentChunk": { +"description": "Represents a chunk.", +"id": "GoogleCloudDocumentaiV1DocumentChunkedDocumentChunk", +"properties": { +"chunkId": { +"description": "ID of the chunk.", +"type": "string" +}, +"content": { +"description": "Text content of the chunk.", +"type": "string" +}, +"pageFooters": { +"description": "Page footers associated with the chunk.", +"items": { +"$ref": "GoogleCloudDocumentaiV1DocumentChunkedDocumentChunkChunkPageFooter" +}, +"type": "array" +}, +"pageHeaders": { +"description": "Page headers associated with the chunk.", +"items": { +"$ref": "GoogleCloudDocumentaiV1DocumentChunkedDocumentChunkChunkPageHeader" +}, +"type": "array" +}, +"pageSpan": { +"$ref": "GoogleCloudDocumentaiV1DocumentChunkedDocumentChunkChunkPageSpan", +"description": "Page span of the chunk." +}, +"sourceBlockIds": { +"description": "Unused.", +"items": { +"type": "string" +}, +"type": "array" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1DocumentChunkedDocumentChunkChunkPageFooter": { +"description": "Represents the page footer associated with the chunk.", +"id": "GoogleCloudDocumentaiV1DocumentChunkedDocumentChunkChunkPageFooter", +"properties": { +"pageSpan": { +"$ref": "GoogleCloudDocumentaiV1DocumentChunkedDocumentChunkChunkPageSpan", +"description": "Page span of the footer." +}, +"text": { +"description": "Footer in text format.", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1DocumentChunkedDocumentChunkChunkPageHeader": { +"description": "Represents the page header associated with the chunk.", +"id": "GoogleCloudDocumentaiV1DocumentChunkedDocumentChunkChunkPageHeader", +"properties": { +"pageSpan": { +"$ref": "GoogleCloudDocumentaiV1DocumentChunkedDocumentChunkChunkPageSpan", +"description": "Page span of the header." 
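The new chunkedDocument field gives layout-parser output a shape that is straightforward to consume. A sketch of processing a PDF and walking the returned chunks; the processor name is illustrative and, per the schema, chunkedDocument is only populated when the chunking config applies (i.e. for a layout parser processor):

import base64

from googleapiclient.discovery import build

docai = build("documentai", "v1")
name = "projects/123/locations/us/processors/abc123"  # Illustrative.

with open("report.pdf", "rb") as f:
    content = base64.b64encode(f.read()).decode("utf-8")

result = docai.projects().locations().processors().process(
    name=name,
    body={"rawDocument": {"content": content, "mimeType": "application/pdf"}},
).execute()

for chunk in result["document"].get("chunkedDocument", {}).get("chunks", []):
    span = chunk.get("pageSpan", {})
    print(chunk["chunkId"], span.get("pageStart"), span.get("pageEnd"))
    print(chunk.get("content", "")[:80])  # First 80 chars of the chunk text.
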
+}, +"text": { +"description": "Header in text format.", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1DocumentChunkedDocumentChunkChunkPageSpan": { +"description": "Represents where the chunk starts and ends in the document.", +"id": "GoogleCloudDocumentaiV1DocumentChunkedDocumentChunkChunkPageSpan", +"properties": { +"pageEnd": { +"description": "Page where chunk ends in the document.", +"format": "int32", +"type": "integer" +}, +"pageStart": { +"description": "Page where chunk starts in the document.", +"format": "int32", +"type": "integer" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1DocumentDocumentLayout": { +"description": "Represents the parsed layout of a document as a collection of blocks that the document is divided into.", +"id": "GoogleCloudDocumentaiV1DocumentDocumentLayout", +"properties": { +"blocks": { +"description": "List of blocks in the document.", +"items": { +"$ref": "GoogleCloudDocumentaiV1DocumentDocumentLayoutDocumentLayoutBlock" +}, +"type": "array" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1DocumentDocumentLayoutDocumentLayoutBlock": { +"description": "Represents a block. A block could be one of the various types (text, table, list) supported.", +"id": "GoogleCloudDocumentaiV1DocumentDocumentLayoutDocumentLayoutBlock", +"properties": { +"blockId": { +"description": "ID of the block.", +"type": "string" +}, +"listBlock": { +"$ref": "GoogleCloudDocumentaiV1DocumentDocumentLayoutDocumentLayoutBlockLayoutListBlock", +"description": "Block consisting of list content/structure." +}, +"pageSpan": { +"$ref": "GoogleCloudDocumentaiV1DocumentDocumentLayoutDocumentLayoutBlockLayoutPageSpan", +"description": "Page span of the block." +}, +"tableBlock": { +"$ref": "GoogleCloudDocumentaiV1DocumentDocumentLayoutDocumentLayoutBlockLayoutTableBlock", +"description": "Block consisting of table content/structure." +}, +"textBlock": { +"$ref": "GoogleCloudDocumentaiV1DocumentDocumentLayoutDocumentLayoutBlockLayoutTextBlock", +"description": "Block consisting of text content." +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1DocumentDocumentLayoutDocumentLayoutBlockLayoutListBlock": { +"description": "Represents a list type block.", +"id": "GoogleCloudDocumentaiV1DocumentDocumentLayoutDocumentLayoutBlockLayoutListBlock", +"properties": { +"listEntries": { +"description": "List entries that constitute a list block.", +"items": { +"$ref": "GoogleCloudDocumentaiV1DocumentDocumentLayoutDocumentLayoutBlockLayoutListEntry" +}, +"type": "array" +}, +"type": { +"description": "Type of the list_entries (if exist). Available options are `ordered` and `unordered`.", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1DocumentDocumentLayoutDocumentLayoutBlockLayoutListEntry": { +"description": "Represents an entry in the list.", +"id": "GoogleCloudDocumentaiV1DocumentDocumentLayoutDocumentLayoutBlockLayoutListEntry", +"properties": { +"blocks": { +"description": "A list entry is a list of blocks. 
Repeated blocks support further hierarchies and nested blocks.", +"items": { +"$ref": "GoogleCloudDocumentaiV1DocumentDocumentLayoutDocumentLayoutBlock" +}, +"type": "array" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1DocumentDocumentLayoutDocumentLayoutBlockLayoutPageSpan": { +"description": "Represents where the block starts and ends in the document.", +"id": "GoogleCloudDocumentaiV1DocumentDocumentLayoutDocumentLayoutBlockLayoutPageSpan", +"properties": { +"pageEnd": { +"description": "Page where block ends in the document.", +"format": "int32", +"type": "integer" +}, +"pageStart": { +"description": "Page where block starts in the document.", +"format": "int32", +"type": "integer" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1DocumentDocumentLayoutDocumentLayoutBlockLayoutTableBlock": { +"description": "Represents a table type block.", +"id": "GoogleCloudDocumentaiV1DocumentDocumentLayoutDocumentLayoutBlockLayoutTableBlock", +"properties": { +"bodyRows": { +"description": "Body rows containing main table content.", +"items": { +"$ref": "GoogleCloudDocumentaiV1DocumentDocumentLayoutDocumentLayoutBlockLayoutTableRow" +}, +"type": "array" +}, +"caption": { +"description": "Table caption/title.", +"type": "string" +}, +"headerRows": { +"description": "Header rows at the top of the table.", +"items": { +"$ref": "GoogleCloudDocumentaiV1DocumentDocumentLayoutDocumentLayoutBlockLayoutTableRow" +}, +"type": "array" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1DocumentDocumentLayoutDocumentLayoutBlockLayoutTableCell": { +"description": "Represents a cell in a table row.", +"id": "GoogleCloudDocumentaiV1DocumentDocumentLayoutDocumentLayoutBlockLayoutTableCell", +"properties": { +"blocks": { +"description": "A table cell is a list of blocks. Repeated blocks support further hierarchies and nested blocks.", +"items": { +"$ref": "GoogleCloudDocumentaiV1DocumentDocumentLayoutDocumentLayoutBlock" +}, +"type": "array" +}, +"colSpan": { +"description": "How many columns this cell spans.", +"format": "int32", +"type": "integer" +}, +"rowSpan": { +"description": "How many rows this cell spans.", +"format": "int32", +"type": "integer" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1DocumentDocumentLayoutDocumentLayoutBlockLayoutTableRow": { +"description": "Represents a row in a table.", +"id": "GoogleCloudDocumentaiV1DocumentDocumentLayoutDocumentLayoutBlockLayoutTableRow", +"properties": { +"cells": { +"description": "A table row is a list of table cells.", +"items": { +"$ref": "GoogleCloudDocumentaiV1DocumentDocumentLayoutDocumentLayoutBlockLayoutTableCell" +}, +"type": "array" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1DocumentDocumentLayoutDocumentLayoutBlockLayoutTextBlock": { +"description": "Represents a text type block.", +"id": "GoogleCloudDocumentaiV1DocumentDocumentLayoutDocumentLayoutBlockLayoutTextBlock", +"properties": { +"blocks": { +"description": "A text block could further have child blocks. Repeated blocks support further hierarchies and nested blocks.", +"items": { +"$ref": "GoogleCloudDocumentaiV1DocumentDocumentLayoutDocumentLayoutBlock" +}, +"type": "array" +}, +"text": { +"description": "Text content stored in the block.", +"type": "string" +}, +"type": { +"description": "Type of the text in the block. 
Available options are: `paragraph`, `subtitle`, `heading-1`, `heading-2`, `heading-3`, `heading-4`, `heading-5`, `header`, `footer`.", +"type": "string" +} +}, +"type": "object" +}, "GoogleCloudDocumentaiV1DocumentEntity": { "description": "An entity that could be a phrase in the text or a property that belongs to the document. It is a known entity type, such as a person, an organization, or location.", "id": "GoogleCloudDocumentaiV1DocumentEntity", @@ -4117,6 +4401,10 @@ true "$ref": "GoogleCloudDocumentaiV1ProcessOptionsIndividualPageSelector", "description": "Which pages to process (1-indexed)." }, +"layoutConfig": { +"$ref": "GoogleCloudDocumentaiV1ProcessOptionsLayoutConfig", +"description": "Optional. Only applicable to `LAYOUT_PARSER_PROCESSOR`. Returns error if set on other processor types." +}, "ocrConfig": { "$ref": "GoogleCloudDocumentaiV1OcrConfig", "description": "Only applicable to `OCR_PROCESSOR` and `FORM_PARSER_PROCESSOR`. Returns error if set on other processor types." @@ -4143,6 +4431,33 @@ true }, "type": "object" }, +"GoogleCloudDocumentaiV1ProcessOptionsLayoutConfig": { +"description": "Serving config for layout parser processor.", +"id": "GoogleCloudDocumentaiV1ProcessOptionsLayoutConfig", +"properties": { +"chunkingConfig": { +"$ref": "GoogleCloudDocumentaiV1ProcessOptionsLayoutConfigChunkingConfig", +"description": "Optional. Config for chunking in layout parser processor." +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1ProcessOptionsLayoutConfigChunkingConfig": { +"description": "Serving config for chunking.", +"id": "GoogleCloudDocumentaiV1ProcessOptionsLayoutConfigChunkingConfig", +"properties": { +"chunkSize": { +"description": "Optional. The chunk sizes to use when splitting documents, in order of level.", +"format": "int32", +"type": "integer" +}, +"includeAncestorHeadings": { +"description": "Optional. Whether or not to include ancestor headings when splitting.", +"type": "boolean" +} +}, +"type": "object" +}, "GoogleCloudDocumentaiV1ProcessRequest": { "description": "Request message for the ProcessDocument method.", "id": "GoogleCloudDocumentaiV1ProcessRequest", @@ -4236,6 +4551,16 @@ true "readOnly": true, "type": "array" }, +"satisfiesPzi": { +"description": "Output only. Reserved for future use.", +"readOnly": true, +"type": "boolean" +}, +"satisfiesPzs": { +"description": "Output only. Reserved for future use.", +"readOnly": true, +"type": "boolean" +}, "state": { "description": "Output only. The state of the processor.", "enum": [ @@ -4397,6 +4722,16 @@ true "description": "Identifier. The resource name of the processor version. Format: `projects/{project}/locations/{location}/processors/{processor}/processorVersions/{processor_version}`", "type": "string" }, +"satisfiesPzi": { +"description": "Output only. Reserved for future use.", +"readOnly": true, +"type": "boolean" +}, +"satisfiesPzs": { +"description": "Output only. Reserved for future use.", +"readOnly": true, +"type": "boolean" +}, "state": { "description": "Output only. The state of the processor version.", "enum": [ @@ -4810,86 +5145,370 @@ true }, "type": "array" }, -"vertices": { -"description": "The bounding polygon vertices.", +"vertices": { +"description": "The bounding polygon vertices.", +"items": { +"$ref": "GoogleCloudDocumentaiV1beta1Vertex" +}, +"type": "array" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta1Document": { +"description": "Document represents the canonical document resource in Document AI. 
It is an interchange format that provides insights into documents and allows for collaboration between users and Document AI to iterate and optimize for quality.", +"id": "GoogleCloudDocumentaiV1beta1Document", +"properties": { +"chunkedDocument": { +"$ref": "GoogleCloudDocumentaiV1beta1DocumentChunkedDocument", +"description": "Document chunked based on chunking config." +}, +"content": { +"description": "Optional. Inline document content, represented as a stream of bytes. Note: As with all `bytes` fields, protobuffers use a pure binary representation, whereas JSON representations use base64.", +"format": "byte", +"type": "string" +}, +"documentLayout": { +"$ref": "GoogleCloudDocumentaiV1beta1DocumentDocumentLayout", +"description": "Parsed layout of the document." +}, +"entities": { +"description": "A list of entities detected on Document.text. For document shards, entities in this list may cross shard boundaries.", +"items": { +"$ref": "GoogleCloudDocumentaiV1beta1DocumentEntity" +}, +"type": "array" +}, +"entityRelations": { +"description": "Placeholder. Relationship among Document.entities.", +"items": { +"$ref": "GoogleCloudDocumentaiV1beta1DocumentEntityRelation" +}, +"type": "array" +}, +"error": { +"$ref": "GoogleRpcStatus", +"description": "Any error that occurred while processing this document." +}, +"mimeType": { +"description": "An IANA published [media type (MIME type)](https://www.iana.org/assignments/media-types/media-types.xhtml).", +"type": "string" +}, +"pages": { +"description": "Visual page layout for the Document.", +"items": { +"$ref": "GoogleCloudDocumentaiV1beta1DocumentPage" +}, +"type": "array" +}, +"revisions": { +"description": "Placeholder. Revision history of this document.", +"items": { +"$ref": "GoogleCloudDocumentaiV1beta1DocumentRevision" +}, +"type": "array" +}, +"shardInfo": { +"$ref": "GoogleCloudDocumentaiV1beta1DocumentShardInfo", +"description": "Information about the sharding if this document is sharded part of a larger document. If the document is not sharded, this message is not specified." +}, +"text": { +"description": "Optional. UTF-8 encoded text in reading order from the document.", +"type": "string" +}, +"textChanges": { +"description": "Placeholder. A list of text corrections made to Document.text. This is usually used for annotating corrections to OCR mistakes. Text changes for a given revision may not overlap with each other.", +"items": { +"$ref": "GoogleCloudDocumentaiV1beta1DocumentTextChange" +}, +"type": "array" +}, +"textStyles": { +"deprecated": true, +"description": "Styles for the Document.text.", +"items": { +"$ref": "GoogleCloudDocumentaiV1beta1DocumentStyle" +}, +"type": "array" +}, +"uri": { +"description": "Optional. Currently supports Google Cloud Storage URI of the form `gs://bucket_name/object_name`. Object versioning is not supported. 
For more information, refer to [Google Cloud Storage Request URIs](https://cloud.google.com/storage/docs/reference-uris).", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta1DocumentChunkedDocument": { +"description": "Represents the chunks that the document is divided into.", +"id": "GoogleCloudDocumentaiV1beta1DocumentChunkedDocument", +"properties": { +"chunks": { +"description": "List of chunks.", +"items": { +"$ref": "GoogleCloudDocumentaiV1beta1DocumentChunkedDocumentChunk" +}, +"type": "array" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta1DocumentChunkedDocumentChunk": { +"description": "Represents a chunk.", +"id": "GoogleCloudDocumentaiV1beta1DocumentChunkedDocumentChunk", +"properties": { +"chunkId": { +"description": "ID of the chunk.", +"type": "string" +}, +"content": { +"description": "Text content of the chunk.", +"type": "string" +}, +"pageFooters": { +"description": "Page footers associated with the chunk.", +"items": { +"$ref": "GoogleCloudDocumentaiV1beta1DocumentChunkedDocumentChunkChunkPageFooter" +}, +"type": "array" +}, +"pageHeaders": { +"description": "Page headers associated with the chunk.", +"items": { +"$ref": "GoogleCloudDocumentaiV1beta1DocumentChunkedDocumentChunkChunkPageHeader" +}, +"type": "array" +}, +"pageSpan": { +"$ref": "GoogleCloudDocumentaiV1beta1DocumentChunkedDocumentChunkChunkPageSpan", +"description": "Page span of the chunk." +}, +"sourceBlockIds": { +"description": "Unused.", +"items": { +"type": "string" +}, +"type": "array" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta1DocumentChunkedDocumentChunkChunkPageFooter": { +"description": "Represents the page footer associated with the chunk.", +"id": "GoogleCloudDocumentaiV1beta1DocumentChunkedDocumentChunkChunkPageFooter", +"properties": { +"pageSpan": { +"$ref": "GoogleCloudDocumentaiV1beta1DocumentChunkedDocumentChunkChunkPageSpan", +"description": "Page span of the footer." +}, +"text": { +"description": "Footer in text format.", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta1DocumentChunkedDocumentChunkChunkPageHeader": { +"description": "Represents the page header associated with the chunk.", +"id": "GoogleCloudDocumentaiV1beta1DocumentChunkedDocumentChunkChunkPageHeader", +"properties": { +"pageSpan": { +"$ref": "GoogleCloudDocumentaiV1beta1DocumentChunkedDocumentChunkChunkPageSpan", +"description": "Page span of the header." 
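On the request side, chunking is driven by the layoutConfig.chunkingConfig added to GoogleCloudDocumentaiV1ProcessOptions earlier in this file, which the schema says is only valid for LAYOUT_PARSER_PROCESSOR. A minimal processOptions payload to attach to the process() body from the previous sketch; the chunk size is an illustrative choice:

process_options = {
    "layoutConfig": {
        "chunkingConfig": {
            "chunkSize": 500,                 # Illustrative target chunk size.
            "includeAncestorHeadings": True,  # Keep heading context with each chunk.
        }
    }
}
body = {
    "rawDocument": {"content": content, "mimeType": "application/pdf"},
    # Per the field description, this errors on non-layout-parser processors.
    "processOptions": process_options,
}
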
+}, +"text": { +"description": "Header in text format.", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta1DocumentChunkedDocumentChunkChunkPageSpan": { +"description": "Represents where the chunk starts and ends in the document.", +"id": "GoogleCloudDocumentaiV1beta1DocumentChunkedDocumentChunkChunkPageSpan", +"properties": { +"pageEnd": { +"description": "Page where chunk ends in the document.", +"format": "int32", +"type": "integer" +}, +"pageStart": { +"description": "Page where chunk starts in the document.", +"format": "int32", +"type": "integer" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta1DocumentDocumentLayout": { +"description": "Represents the parsed layout of a document as a collection of blocks that the document is divided into.", +"id": "GoogleCloudDocumentaiV1beta1DocumentDocumentLayout", +"properties": { +"blocks": { +"description": "List of blocks in the document.", +"items": { +"$ref": "GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlock" +}, +"type": "array" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlock": { +"description": "Represents a block. A block could be one of the various types (text, table, list) supported.", +"id": "GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlock", +"properties": { +"blockId": { +"description": "ID of the block.", +"type": "string" +}, +"listBlock": { +"$ref": "GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlockLayoutListBlock", +"description": "Block consisting of list content/structure." +}, +"pageSpan": { +"$ref": "GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlockLayoutPageSpan", +"description": "Page span of the block." +}, +"tableBlock": { +"$ref": "GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlockLayoutTableBlock", +"description": "Block consisting of table content/structure." +}, +"textBlock": { +"$ref": "GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlockLayoutTextBlock", +"description": "Block consisting of text content." +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlockLayoutListBlock": { +"description": "Represents a list type block.", +"id": "GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlockLayoutListBlock", +"properties": { +"listEntries": { +"description": "List entries that constitute a list block.", +"items": { +"$ref": "GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlockLayoutListEntry" +}, +"type": "array" +}, +"type": { +"description": "Type of the list_entries (if exist). Available options are `ordered` and `unordered`.", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlockLayoutListEntry": { +"description": "Represents an entry in the list.", +"id": "GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlockLayoutListEntry", +"properties": { +"blocks": { +"description": "A list entry is a list of blocks. Repeated blocks support further hierarchies and nested blocks.", "items": { -"$ref": "GoogleCloudDocumentaiV1beta1Vertex" +"$ref": "GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlock" }, "type": "array" } }, "type": "object" }, -"GoogleCloudDocumentaiV1beta1Document": { -"description": "Document represents the canonical document resource in Document AI. 
It is an interchange format that provides insights into documents and allows for collaboration between users and Document AI to iterate and optimize for quality.", -"id": "GoogleCloudDocumentaiV1beta1Document", +"GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlockLayoutPageSpan": { +"description": "Represents where the block starts and ends in the document.", +"id": "GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlockLayoutPageSpan", "properties": { -"content": { -"description": "Optional. Inline document content, represented as a stream of bytes. Note: As with all `bytes` fields, protobuffers use a pure binary representation, whereas JSON representations use base64.", -"format": "byte", -"type": "string" +"pageEnd": { +"description": "Page where block ends in the document.", +"format": "int32", +"type": "integer" }, -"entities": { -"description": "A list of entities detected on Document.text. For document shards, entities in this list may cross shard boundaries.", -"items": { -"$ref": "GoogleCloudDocumentaiV1beta1DocumentEntity" +"pageStart": { +"description": "Page where block starts in the document.", +"format": "int32", +"type": "integer" +} }, -"type": "array" +"type": "object" }, -"entityRelations": { -"description": "Placeholder. Relationship among Document.entities.", +"GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlockLayoutTableBlock": { +"description": "Represents a table type block.", +"id": "GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlockLayoutTableBlock", +"properties": { +"bodyRows": { +"description": "Body rows containing main table content.", "items": { -"$ref": "GoogleCloudDocumentaiV1beta1DocumentEntityRelation" +"$ref": "GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlockLayoutTableRow" }, "type": "array" }, -"error": { -"$ref": "GoogleRpcStatus", -"description": "Any error that occurred while processing this document." -}, -"mimeType": { -"description": "An IANA published [media type (MIME type)](https://www.iana.org/assignments/media-types/media-types.xhtml).", +"caption": { +"description": "Table caption/title.", "type": "string" }, -"pages": { -"description": "Visual page layout for the Document.", +"headerRows": { +"description": "Header rows at the top of the table.", "items": { -"$ref": "GoogleCloudDocumentaiV1beta1DocumentPage" +"$ref": "GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlockLayoutTableRow" }, "type": "array" +} }, -"revisions": { -"description": "Placeholder. Revision history of this document.", +"type": "object" +}, +"GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlockLayoutTableCell": { +"description": "Represents a cell in a table row.", +"id": "GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlockLayoutTableCell", +"properties": { +"blocks": { +"description": "A table cell is a list of blocks. Repeated blocks support further hierarchies and nested blocks.", "items": { -"$ref": "GoogleCloudDocumentaiV1beta1DocumentRevision" +"$ref": "GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlock" }, "type": "array" }, -"shardInfo": { -"$ref": "GoogleCloudDocumentaiV1beta1DocumentShardInfo", -"description": "Information about the sharding if this document is sharded part of a larger document. If the document is not sharded, this message is not specified." 
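The documentLayout block types nest recursively: text blocks carry child blocks, and list entries and table cells are themselves lists of blocks. A small recursive walk therefore covers all three; this is a pure-Python sketch over the response dict, matching the schemas above:

def walk_blocks(blocks, depth=0):
    """Recursively walk documentLayout blocks (text, table, or list)."""
    for block in blocks:
        if "textBlock" in block:
            tb = block["textBlock"]
            print("  " * depth + "[%s] %s" % (tb.get("type", ""), tb.get("text", "")))
            walk_blocks(tb.get("blocks", []), depth + 1)
        elif "listBlock" in block:
            for entry in block["listBlock"].get("listEntries", []):
                walk_blocks(entry.get("blocks", []), depth + 1)
        elif "tableBlock" in block:
            # Table rows and cells contain nested blocks of their own.
            rows = (block["tableBlock"].get("headerRows", [])
                    + block["tableBlock"].get("bodyRows", []))
            for row in rows:
                for cell in row.get("cells", []):
                    walk_blocks(cell.get("blocks", []), depth + 1)

# Usage: walk_blocks(result["document"].get("documentLayout", {}).get("blocks", []))
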
+"colSpan": { +"description": "How many columns this cell spans.", +"format": "int32", +"type": "integer" }, -"text": { -"description": "Optional. UTF-8 encoded text in reading order from the document.", -"type": "string" +"rowSpan": { +"description": "How many rows this cell spans.", +"format": "int32", +"type": "integer" +} }, -"textChanges": { -"description": "Placeholder. A list of text corrections made to Document.text. This is usually used for annotating corrections to OCR mistakes. Text changes for a given revision may not overlap with each other.", +"type": "object" +}, +"GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlockLayoutTableRow": { +"description": "Represents a row in a table.", +"id": "GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlockLayoutTableRow", +"properties": { +"cells": { +"description": "A table row is a list of table cells.", "items": { -"$ref": "GoogleCloudDocumentaiV1beta1DocumentTextChange" +"$ref": "GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlockLayoutTableCell" }, "type": "array" +} }, -"textStyles": { -"deprecated": true, -"description": "Styles for the Document.text.", +"type": "object" +}, +"GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlockLayoutTextBlock": { +"description": "Represents a text type block.", +"id": "GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlockLayoutTextBlock", +"properties": { +"blocks": { +"description": "A text block could further have child blocks. Repeated blocks support further hierarchies and nested blocks.", "items": { -"$ref": "GoogleCloudDocumentaiV1beta1DocumentStyle" +"$ref": "GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlock" }, "type": "array" }, -"uri": { -"description": "Optional. Currently supports Google Cloud Storage URI of the form `gs://bucket_name/object_name`. Object versioning is not supported. For more information, refer to [Google Cloud Storage Request URIs](https://cloud.google.com/storage/docs/reference-uris).", +"text": { +"description": "Text content stored in the block.", +"type": "string" +}, +"type": { +"description": "Type of the text in the block. Available options are: `paragraph`, `subtitle`, `heading-1`, `heading-2`, `heading-3`, `heading-4`, `heading-5`, `header`, `footer`.", "type": "string" } }, @@ -6198,11 +6817,19 @@ true "description": "Document represents the canonical document resource in Document AI. It is an interchange format that provides insights into documents and allows for collaboration between users and Document AI to iterate and optimize for quality.", "id": "GoogleCloudDocumentaiV1beta2Document", "properties": { +"chunkedDocument": { +"$ref": "GoogleCloudDocumentaiV1beta2DocumentChunkedDocument", +"description": "Document chunked based on chunking config." +}, "content": { "description": "Optional. Inline document content, represented as a stream of bytes. Note: As with all `bytes` fields, protobuffers use a pure binary representation, whereas JSON representations use base64.", "format": "byte", "type": "string" }, +"documentLayout": { +"$ref": "GoogleCloudDocumentaiV1beta2DocumentDocumentLayout", +"description": "Parsed layout of the document." +}, "entities": { "description": "A list of entities detected on Document.text. 
For document shards, entities in this list may cross shard boundaries.", "items": { @@ -6276,6 +6903,282 @@ true }, "type": "object" }, +"GoogleCloudDocumentaiV1beta2DocumentChunkedDocument": { +"description": "Represents the chunks that the document is divided into.", +"id": "GoogleCloudDocumentaiV1beta2DocumentChunkedDocument", +"properties": { +"chunks": { +"description": "List of chunks.", +"items": { +"$ref": "GoogleCloudDocumentaiV1beta2DocumentChunkedDocumentChunk" +}, +"type": "array" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta2DocumentChunkedDocumentChunk": { +"description": "Represents a chunk.", +"id": "GoogleCloudDocumentaiV1beta2DocumentChunkedDocumentChunk", +"properties": { +"chunkId": { +"description": "ID of the chunk.", +"type": "string" +}, +"content": { +"description": "Text content of the chunk.", +"type": "string" +}, +"pageFooters": { +"description": "Page footers associated with the chunk.", +"items": { +"$ref": "GoogleCloudDocumentaiV1beta2DocumentChunkedDocumentChunkChunkPageFooter" +}, +"type": "array" +}, +"pageHeaders": { +"description": "Page headers associated with the chunk.", +"items": { +"$ref": "GoogleCloudDocumentaiV1beta2DocumentChunkedDocumentChunkChunkPageHeader" +}, +"type": "array" +}, +"pageSpan": { +"$ref": "GoogleCloudDocumentaiV1beta2DocumentChunkedDocumentChunkChunkPageSpan", +"description": "Page span of the chunk." +}, +"sourceBlockIds": { +"description": "Unused.", +"items": { +"type": "string" +}, +"type": "array" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta2DocumentChunkedDocumentChunkChunkPageFooter": { +"description": "Represents the page footer associated with the chunk.", +"id": "GoogleCloudDocumentaiV1beta2DocumentChunkedDocumentChunkChunkPageFooter", +"properties": { +"pageSpan": { +"$ref": "GoogleCloudDocumentaiV1beta2DocumentChunkedDocumentChunkChunkPageSpan", +"description": "Page span of the footer." +}, +"text": { +"description": "Footer in text format.", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta2DocumentChunkedDocumentChunkChunkPageHeader": { +"description": "Represents the page header associated with the chunk.", +"id": "GoogleCloudDocumentaiV1beta2DocumentChunkedDocumentChunkChunkPageHeader", +"properties": { +"pageSpan": { +"$ref": "GoogleCloudDocumentaiV1beta2DocumentChunkedDocumentChunkChunkPageSpan", +"description": "Page span of the header." 
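The `chunkedDocument` field introduced above divides a document into retrieval-sized chunks, each with an ID, text content, and a page span. A brief illustrative sketch of consuming it, assuming `document` is a Document dict with this new field populated (field names follow the Chunk schemas above; nothing here is library API):

for chunk in document.get("chunkedDocument", {}).get("chunks", []):
    span = chunk.get("pageSpan", {})
    print(
        f"chunk {chunk.get('chunkId')} "
        f"(pages {span.get('pageStart')}-{span.get('pageEnd')}): "
        f"{len(chunk.get('content', ''))} chars"
    )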
+}, +"text": { +"description": "Header in text format.", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta2DocumentChunkedDocumentChunkChunkPageSpan": { +"description": "Represents where the chunk starts and ends in the document.", +"id": "GoogleCloudDocumentaiV1beta2DocumentChunkedDocumentChunkChunkPageSpan", +"properties": { +"pageEnd": { +"description": "Page where chunk ends in the document.", +"format": "int32", +"type": "integer" +}, +"pageStart": { +"description": "Page where chunk starts in the document.", +"format": "int32", +"type": "integer" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta2DocumentDocumentLayout": { +"description": "Represents the parsed layout of a document as a collection of blocks that the document is divided into.", +"id": "GoogleCloudDocumentaiV1beta2DocumentDocumentLayout", +"properties": { +"blocks": { +"description": "List of blocks in the document.", +"items": { +"$ref": "GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlock" +}, +"type": "array" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlock": { +"description": "Represents a block. A block could be one of the various types (text, table, list) supported.", +"id": "GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlock", +"properties": { +"blockId": { +"description": "ID of the block.", +"type": "string" +}, +"listBlock": { +"$ref": "GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlockLayoutListBlock", +"description": "Block consisting of list content/structure." +}, +"pageSpan": { +"$ref": "GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlockLayoutPageSpan", +"description": "Page span of the block." +}, +"tableBlock": { +"$ref": "GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlockLayoutTableBlock", +"description": "Block consisting of table content/structure." +}, +"textBlock": { +"$ref": "GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlockLayoutTextBlock", +"description": "Block consisting of text content." +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlockLayoutListBlock": { +"description": "Represents a list type block.", +"id": "GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlockLayoutListBlock", +"properties": { +"listEntries": { +"description": "List entries that constitute a list block.", +"items": { +"$ref": "GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlockLayoutListEntry" +}, +"type": "array" +}, +"type": { +"description": "Type of the list_entries (if exist). Available options are `ordered` and `unordered`.", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlockLayoutListEntry": { +"description": "Represents an entry in the list.", +"id": "GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlockLayoutListEntry", +"properties": { +"blocks": { +"description": "A list entry is a list of blocks. 
Repeated blocks support further hierarchies and nested blocks.", +"items": { +"$ref": "GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlock" +}, +"type": "array" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlockLayoutPageSpan": { +"description": "Represents where the block starts and ends in the document.", +"id": "GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlockLayoutPageSpan", +"properties": { +"pageEnd": { +"description": "Page where block ends in the document.", +"format": "int32", +"type": "integer" +}, +"pageStart": { +"description": "Page where block starts in the document.", +"format": "int32", +"type": "integer" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlockLayoutTableBlock": { +"description": "Represents a table type block.", +"id": "GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlockLayoutTableBlock", +"properties": { +"bodyRows": { +"description": "Body rows containing main table content.", +"items": { +"$ref": "GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlockLayoutTableRow" +}, +"type": "array" +}, +"caption": { +"description": "Table caption/title.", +"type": "string" +}, +"headerRows": { +"description": "Header rows at the top of the table.", +"items": { +"$ref": "GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlockLayoutTableRow" +}, +"type": "array" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlockLayoutTableCell": { +"description": "Represents a cell in a table row.", +"id": "GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlockLayoutTableCell", +"properties": { +"blocks": { +"description": "A table cell is a list of blocks. Repeated blocks support further hierarchies and nested blocks.", +"items": { +"$ref": "GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlock" +}, +"type": "array" +}, +"colSpan": { +"description": "How many columns this cell spans.", +"format": "int32", +"type": "integer" +}, +"rowSpan": { +"description": "How many rows this cell spans.", +"format": "int32", +"type": "integer" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlockLayoutTableRow": { +"description": "Represents a row in a table.", +"id": "GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlockLayoutTableRow", +"properties": { +"cells": { +"description": "A table row is a list of table cells.", +"items": { +"$ref": "GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlockLayoutTableCell" +}, +"type": "array" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlockLayoutTextBlock": { +"description": "Represents a text type block.", +"id": "GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlockLayoutTextBlock", +"properties": { +"blocks": { +"description": "A text block could further have child blocks. Repeated blocks support further hierarchies and nested blocks.", +"items": { +"$ref": "GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlock" +}, +"type": "array" +}, +"text": { +"description": "Text content stored in the block.", +"type": "string" +}, +"type": { +"description": "Type of the text in the block. 
Available options are: `paragraph`, `subtitle`, `heading-1`, `heading-2`, `heading-3`, `heading-4`, `heading-5`, `header`, `footer`.", +"type": "string" +} +}, +"type": "object" +}, "GoogleCloudDocumentaiV1beta2DocumentEntity": { "description": "An entity that could be a phrase in the text or a property that belongs to the document. It is a known entity type, such as a person, an organization, or location.", "id": "GoogleCloudDocumentaiV1beta2DocumentEntity", @@ -7740,6 +8643,16 @@ true "description": "Dataset resource name. Format: `projects/{project}/locations/{location}/processors/{processor}/dataset`", "type": "string" }, +"satisfiesPzi": { +"description": "Output only. Reserved for future use.", +"readOnly": true, +"type": "boolean" +}, +"satisfiesPzs": { +"description": "Output only. Reserved for future use.", +"readOnly": true, +"type": "boolean" +}, "spannerIndexingConfig": { "$ref": "GoogleCloudDocumentaiV1beta3DatasetSpannerIndexingConfig", "description": "Optional. A lightweight indexing source with low latency and high reliability, but lacking advanced features like CMEK and content-based search." diff --git a/googleapiclient/discovery_cache/documents/documentai.v1beta2.json b/googleapiclient/discovery_cache/documents/documentai.v1beta2.json index a10e31dc15e..9ff6999aaf6 100644 --- a/googleapiclient/discovery_cache/documents/documentai.v1beta2.json +++ b/googleapiclient/discovery_cache/documents/documentai.v1beta2.json @@ -292,7 +292,7 @@ } } }, -"revision": "20240523", +"revision": "20240531", "rootUrl": "https://documentai.googleapis.com/", "schemas": { "GoogleCloudDocumentaiUiv1beta3AutoLabelDocumentsMetadata": { @@ -1635,11 +1635,19 @@ "description": "Document represents the canonical document resource in Document AI. It is an interchange format that provides insights into documents and allows for collaboration between users and Document AI to iterate and optimize for quality.", "id": "GoogleCloudDocumentaiV1beta1Document", "properties": { +"chunkedDocument": { +"$ref": "GoogleCloudDocumentaiV1beta1DocumentChunkedDocument", +"description": "Document chunked based on chunking config." +}, "content": { "description": "Optional. Inline document content, represented as a stream of bytes. Note: As with all `bytes` fields, protobuffers use a pure binary representation, whereas JSON representations use base64.", "format": "byte", "type": "string" }, +"documentLayout": { +"$ref": "GoogleCloudDocumentaiV1beta1DocumentDocumentLayout", +"description": "Parsed layout of the document." +}, "entities": { "description": "A list of entities detected on Document.text. 
For document shards, entities in this list may cross shard boundaries.", "items": { @@ -1706,6 +1714,282 @@ }, "type": "object" }, +"GoogleCloudDocumentaiV1beta1DocumentChunkedDocument": { +"description": "Represents the chunks that the document is divided into.", +"id": "GoogleCloudDocumentaiV1beta1DocumentChunkedDocument", +"properties": { +"chunks": { +"description": "List of chunks.", +"items": { +"$ref": "GoogleCloudDocumentaiV1beta1DocumentChunkedDocumentChunk" +}, +"type": "array" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta1DocumentChunkedDocumentChunk": { +"description": "Represents a chunk.", +"id": "GoogleCloudDocumentaiV1beta1DocumentChunkedDocumentChunk", +"properties": { +"chunkId": { +"description": "ID of the chunk.", +"type": "string" +}, +"content": { +"description": "Text content of the chunk.", +"type": "string" +}, +"pageFooters": { +"description": "Page footers associated with the chunk.", +"items": { +"$ref": "GoogleCloudDocumentaiV1beta1DocumentChunkedDocumentChunkChunkPageFooter" +}, +"type": "array" +}, +"pageHeaders": { +"description": "Page headers associated with the chunk.", +"items": { +"$ref": "GoogleCloudDocumentaiV1beta1DocumentChunkedDocumentChunkChunkPageHeader" +}, +"type": "array" +}, +"pageSpan": { +"$ref": "GoogleCloudDocumentaiV1beta1DocumentChunkedDocumentChunkChunkPageSpan", +"description": "Page span of the chunk." +}, +"sourceBlockIds": { +"description": "Unused.", +"items": { +"type": "string" +}, +"type": "array" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta1DocumentChunkedDocumentChunkChunkPageFooter": { +"description": "Represents the page footer associated with the chunk.", +"id": "GoogleCloudDocumentaiV1beta1DocumentChunkedDocumentChunkChunkPageFooter", +"properties": { +"pageSpan": { +"$ref": "GoogleCloudDocumentaiV1beta1DocumentChunkedDocumentChunkChunkPageSpan", +"description": "Page span of the footer." +}, +"text": { +"description": "Footer in text format.", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta1DocumentChunkedDocumentChunkChunkPageHeader": { +"description": "Represents the page header associated with the chunk.", +"id": "GoogleCloudDocumentaiV1beta1DocumentChunkedDocumentChunkChunkPageHeader", +"properties": { +"pageSpan": { +"$ref": "GoogleCloudDocumentaiV1beta1DocumentChunkedDocumentChunkChunkPageSpan", +"description": "Page span of the header." 
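Each chunk also carries the page headers and footers associated with it, per the ChunkPageHeader/ChunkPageFooter schemas above. A hypothetical rendering helper, written purely against that JSON shape (the function name and concatenation order are assumptions, not library behavior):

def render_chunk(chunk):
    # Prepend header text, then the chunk body, then footer text.
    parts = [h.get("text", "") for h in chunk.get("pageHeaders", [])]
    parts.append(chunk.get("content", ""))
    parts += [f.get("text", "") for f in chunk.get("pageFooters", [])]
    return "\n".join(p for p in parts if p)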
+}, +"text": { +"description": "Header in text format.", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta1DocumentChunkedDocumentChunkChunkPageSpan": { +"description": "Represents where the chunk starts and ends in the document.", +"id": "GoogleCloudDocumentaiV1beta1DocumentChunkedDocumentChunkChunkPageSpan", +"properties": { +"pageEnd": { +"description": "Page where chunk ends in the document.", +"format": "int32", +"type": "integer" +}, +"pageStart": { +"description": "Page where chunk starts in the document.", +"format": "int32", +"type": "integer" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta1DocumentDocumentLayout": { +"description": "Represents the parsed layout of a document as a collection of blocks that the document is divided into.", +"id": "GoogleCloudDocumentaiV1beta1DocumentDocumentLayout", +"properties": { +"blocks": { +"description": "List of blocks in the document.", +"items": { +"$ref": "GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlock" +}, +"type": "array" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlock": { +"description": "Represents a block. A block could be one of the various types (text, table, list) supported.", +"id": "GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlock", +"properties": { +"blockId": { +"description": "ID of the block.", +"type": "string" +}, +"listBlock": { +"$ref": "GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlockLayoutListBlock", +"description": "Block consisting of list content/structure." +}, +"pageSpan": { +"$ref": "GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlockLayoutPageSpan", +"description": "Page span of the block." +}, +"tableBlock": { +"$ref": "GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlockLayoutTableBlock", +"description": "Block consisting of table content/structure." +}, +"textBlock": { +"$ref": "GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlockLayoutTextBlock", +"description": "Block consisting of text content." +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlockLayoutListBlock": { +"description": "Represents a list type block.", +"id": "GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlockLayoutListBlock", +"properties": { +"listEntries": { +"description": "List entries that constitute a list block.", +"items": { +"$ref": "GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlockLayoutListEntry" +}, +"type": "array" +}, +"type": { +"description": "Type of the list_entries (if exist). Available options are `ordered` and `unordered`.", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlockLayoutListEntry": { +"description": "Represents an entry in the list.", +"id": "GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlockLayoutListEntry", +"properties": { +"blocks": { +"description": "A list entry is a list of blocks. 
Repeated blocks support further hierarchies and nested blocks.", +"items": { +"$ref": "GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlock" +}, +"type": "array" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlockLayoutPageSpan": { +"description": "Represents where the block starts and ends in the document.", +"id": "GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlockLayoutPageSpan", +"properties": { +"pageEnd": { +"description": "Page where block ends in the document.", +"format": "int32", +"type": "integer" +}, +"pageStart": { +"description": "Page where block starts in the document.", +"format": "int32", +"type": "integer" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlockLayoutTableBlock": { +"description": "Represents a table type block.", +"id": "GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlockLayoutTableBlock", +"properties": { +"bodyRows": { +"description": "Body rows containing main table content.", +"items": { +"$ref": "GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlockLayoutTableRow" +}, +"type": "array" +}, +"caption": { +"description": "Table caption/title.", +"type": "string" +}, +"headerRows": { +"description": "Header rows at the top of the table.", +"items": { +"$ref": "GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlockLayoutTableRow" +}, +"type": "array" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlockLayoutTableCell": { +"description": "Represents a cell in a table row.", +"id": "GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlockLayoutTableCell", +"properties": { +"blocks": { +"description": "A table cell is a list of blocks. Repeated blocks support further hierarchies and nested blocks.", +"items": { +"$ref": "GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlock" +}, +"type": "array" +}, +"colSpan": { +"description": "How many columns this cell spans.", +"format": "int32", +"type": "integer" +}, +"rowSpan": { +"description": "How many rows this cell spans.", +"format": "int32", +"type": "integer" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlockLayoutTableRow": { +"description": "Represents a row in a table.", +"id": "GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlockLayoutTableRow", +"properties": { +"cells": { +"description": "A table row is a list of table cells.", +"items": { +"$ref": "GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlockLayoutTableCell" +}, +"type": "array" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlockLayoutTextBlock": { +"description": "Represents a text type block.", +"id": "GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlockLayoutTextBlock", +"properties": { +"blocks": { +"description": "A text block could further have child blocks. Repeated blocks support further hierarchies and nested blocks.", +"items": { +"$ref": "GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlock" +}, +"type": "array" +}, +"text": { +"description": "Text content stored in the block.", +"type": "string" +}, +"type": { +"description": "Type of the text in the block. 
Available options are: `paragraph`, `subtitle`, `heading-1`, `heading-2`, `heading-3`, `heading-4`, `heading-5`, `header`, `footer`.", +"type": "string" +} +}, +"type": "object" +}, "GoogleCloudDocumentaiV1beta1DocumentEntity": { "description": "An entity that could be a phrase in the text or a property that belongs to the document. It is a known entity type, such as a person, an organization, or location.", "id": "GoogleCloudDocumentaiV1beta1DocumentEntity", @@ -3034,11 +3318,19 @@ true "description": "Document represents the canonical document resource in Document AI. It is an interchange format that provides insights into documents and allows for collaboration between users and Document AI to iterate and optimize for quality.", "id": "GoogleCloudDocumentaiV1beta2Document", "properties": { +"chunkedDocument": { +"$ref": "GoogleCloudDocumentaiV1beta2DocumentChunkedDocument", +"description": "Document chunked based on chunking config." +}, "content": { "description": "Optional. Inline document content, represented as a stream of bytes. Note: As with all `bytes` fields, protobuffers use a pure binary representation, whereas JSON representations use base64.", "format": "byte", "type": "string" }, +"documentLayout": { +"$ref": "GoogleCloudDocumentaiV1beta2DocumentDocumentLayout", +"description": "Parsed layout of the document." +}, "entities": { "description": "A list of entities detected on Document.text. For document shards, entities in this list may cross shard boundaries.", "items": { @@ -3112,6 +3404,282 @@ true }, "type": "object" }, +"GoogleCloudDocumentaiV1beta2DocumentChunkedDocument": { +"description": "Represents the chunks that the document is divided into.", +"id": "GoogleCloudDocumentaiV1beta2DocumentChunkedDocument", +"properties": { +"chunks": { +"description": "List of chunks.", +"items": { +"$ref": "GoogleCloudDocumentaiV1beta2DocumentChunkedDocumentChunk" +}, +"type": "array" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta2DocumentChunkedDocumentChunk": { +"description": "Represents a chunk.", +"id": "GoogleCloudDocumentaiV1beta2DocumentChunkedDocumentChunk", +"properties": { +"chunkId": { +"description": "ID of the chunk.", +"type": "string" +}, +"content": { +"description": "Text content of the chunk.", +"type": "string" +}, +"pageFooters": { +"description": "Page footers associated with the chunk.", +"items": { +"$ref": "GoogleCloudDocumentaiV1beta2DocumentChunkedDocumentChunkChunkPageFooter" +}, +"type": "array" +}, +"pageHeaders": { +"description": "Page headers associated with the chunk.", +"items": { +"$ref": "GoogleCloudDocumentaiV1beta2DocumentChunkedDocumentChunkChunkPageHeader" +}, +"type": "array" +}, +"pageSpan": { +"$ref": "GoogleCloudDocumentaiV1beta2DocumentChunkedDocumentChunkChunkPageSpan", +"description": "Page span of the chunk." +}, +"sourceBlockIds": { +"description": "Unused.", +"items": { +"type": "string" +}, +"type": "array" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta2DocumentChunkedDocumentChunkChunkPageFooter": { +"description": "Represents the page footer associated with the chunk.", +"id": "GoogleCloudDocumentaiV1beta2DocumentChunkedDocumentChunkChunkPageFooter", +"properties": { +"pageSpan": { +"$ref": "GoogleCloudDocumentaiV1beta2DocumentChunkedDocumentChunkChunkPageSpan", +"description": "Page span of the footer." 
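Since every chunk exposes a ChunkPageSpan with int32 `pageStart`/`pageEnd` fields (see the schema above), page-based filtering is straightforward. A sketch, again assuming a plain Document dict; the helper is hypothetical:

def chunks_on_page(document, page):
    # Keep only the chunks whose page span covers the requested page.
    result = []
    for chunk in document.get("chunkedDocument", {}).get("chunks", []):
        span = chunk.get("pageSpan", {})
        if span.get("pageStart", 0) <= page <= span.get("pageEnd", 0):
            result.append(chunk)
    return result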
+}, +"text": { +"description": "Footer in text format.", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta2DocumentChunkedDocumentChunkChunkPageHeader": { +"description": "Represents the page header associated with the chunk.", +"id": "GoogleCloudDocumentaiV1beta2DocumentChunkedDocumentChunkChunkPageHeader", +"properties": { +"pageSpan": { +"$ref": "GoogleCloudDocumentaiV1beta2DocumentChunkedDocumentChunkChunkPageSpan", +"description": "Page span of the header." +}, +"text": { +"description": "Header in text format.", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta2DocumentChunkedDocumentChunkChunkPageSpan": { +"description": "Represents where the chunk starts and ends in the document.", +"id": "GoogleCloudDocumentaiV1beta2DocumentChunkedDocumentChunkChunkPageSpan", +"properties": { +"pageEnd": { +"description": "Page where chunk ends in the document.", +"format": "int32", +"type": "integer" +}, +"pageStart": { +"description": "Page where chunk starts in the document.", +"format": "int32", +"type": "integer" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta2DocumentDocumentLayout": { +"description": "Represents the parsed layout of a document as a collection of blocks that the document is divided into.", +"id": "GoogleCloudDocumentaiV1beta2DocumentDocumentLayout", +"properties": { +"blocks": { +"description": "List of blocks in the document.", +"items": { +"$ref": "GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlock" +}, +"type": "array" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlock": { +"description": "Represents a block. A block could be one of the various types (text, table, list) supported.", +"id": "GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlock", +"properties": { +"blockId": { +"description": "ID of the block.", +"type": "string" +}, +"listBlock": { +"$ref": "GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlockLayoutListBlock", +"description": "Block consisting of list content/structure." +}, +"pageSpan": { +"$ref": "GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlockLayoutPageSpan", +"description": "Page span of the block." +}, +"tableBlock": { +"$ref": "GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlockLayoutTableBlock", +"description": "Block consisting of table content/structure." +}, +"textBlock": { +"$ref": "GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlockLayoutTextBlock", +"description": "Block consisting of text content." +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlockLayoutListBlock": { +"description": "Represents a list type block.", +"id": "GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlockLayoutListBlock", +"properties": { +"listEntries": { +"description": "List entries that constitute a list block.", +"items": { +"$ref": "GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlockLayoutListEntry" +}, +"type": "array" +}, +"type": { +"description": "Type of the list_entries (if exist). 
Available options are `ordered` and `unordered`.", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlockLayoutListEntry": { +"description": "Represents an entry in the list.", +"id": "GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlockLayoutListEntry", +"properties": { +"blocks": { +"description": "A list entry is a list of blocks. Repeated blocks support further hierarchies and nested blocks.", +"items": { +"$ref": "GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlock" +}, +"type": "array" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlockLayoutPageSpan": { +"description": "Represents where the block starts and ends in the document.", +"id": "GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlockLayoutPageSpan", +"properties": { +"pageEnd": { +"description": "Page where block ends in the document.", +"format": "int32", +"type": "integer" +}, +"pageStart": { +"description": "Page where block starts in the document.", +"format": "int32", +"type": "integer" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlockLayoutTableBlock": { +"description": "Represents a table type block.", +"id": "GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlockLayoutTableBlock", +"properties": { +"bodyRows": { +"description": "Body rows containing main table content.", +"items": { +"$ref": "GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlockLayoutTableRow" +}, +"type": "array" +}, +"caption": { +"description": "Table caption/title.", +"type": "string" +}, +"headerRows": { +"description": "Header rows at the top of the table.", +"items": { +"$ref": "GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlockLayoutTableRow" +}, +"type": "array" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlockLayoutTableCell": { +"description": "Represents a cell in a table row.", +"id": "GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlockLayoutTableCell", +"properties": { +"blocks": { +"description": "A table cell is a list of blocks. Repeated blocks support further hierarchies and nested blocks.", +"items": { +"$ref": "GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlock" +}, +"type": "array" +}, +"colSpan": { +"description": "How many columns this cell spans.", +"format": "int32", +"type": "integer" +}, +"rowSpan": { +"description": "How many rows this cell spans.", +"format": "int32", +"type": "integer" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlockLayoutTableRow": { +"description": "Represents a row in a table.", +"id": "GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlockLayoutTableRow", +"properties": { +"cells": { +"description": "A table row is a list of table cells.", +"items": { +"$ref": "GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlockLayoutTableCell" +}, +"type": "array" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlockLayoutTextBlock": { +"description": "Represents a text type block.", +"id": "GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlockLayoutTextBlock", +"properties": { +"blocks": { +"description": "A text block could further have child blocks. 
Repeated blocks support further hierarchies and nested blocks.", +"items": { +"$ref": "GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlock" +}, +"type": "array" +}, +"text": { +"description": "Text content stored in the block.", +"type": "string" +}, +"type": { +"description": "Type of the text in the block. Available options are: `paragraph`, `subtitle`, `heading-1`, `heading-2`, `heading-3`, `heading-4`, `heading-5`, `header`, `footer`.", +"type": "string" +} +}, +"type": "object" +}, "GoogleCloudDocumentaiV1beta2DocumentEntity": { "description": "An entity that could be a phrase in the text or a property that belongs to the document. It is a known entity type, such as a person, an organization, or location.", "id": "GoogleCloudDocumentaiV1beta2DocumentEntity", @@ -4733,6 +5301,16 @@ true "description": "Dataset resource name. Format: `projects/{project}/locations/{location}/processors/{processor}/dataset`", "type": "string" }, +"satisfiesPzi": { +"description": "Output only. Reserved for future use.", +"readOnly": true, +"type": "boolean" +}, +"satisfiesPzs": { +"description": "Output only. Reserved for future use.", +"readOnly": true, +"type": "boolean" +}, "spannerIndexingConfig": { "$ref": "GoogleCloudDocumentaiV1beta3DatasetSpannerIndexingConfig", "description": "Optional. A lightweight indexing source with low latency and high reliability, but lacking advanced features like CMEK and content-based search." diff --git a/googleapiclient/discovery_cache/documents/documentai.v1beta3.json b/googleapiclient/discovery_cache/documents/documentai.v1beta3.json index d5a0f659394..ae6c20c56e3 100644 --- a/googleapiclient/discovery_cache/documents/documentai.v1beta3.json +++ b/googleapiclient/discovery_cache/documents/documentai.v1beta3.json @@ -1284,7 +1284,7 @@ } } }, -"revision": "20240523", +"revision": "20240531", "rootUrl": "https://documentai.googleapis.com/", "schemas": { "GoogleCloudDocumentaiUiv1beta3AutoLabelDocumentsMetadata": { @@ -2627,11 +2627,19 @@ "description": "Document represents the canonical document resource in Document AI. It is an interchange format that provides insights into documents and allows for collaboration between users and Document AI to iterate and optimize for quality.", "id": "GoogleCloudDocumentaiV1beta1Document", "properties": { +"chunkedDocument": { +"$ref": "GoogleCloudDocumentaiV1beta1DocumentChunkedDocument", +"description": "Document chunked based on chunking config." +}, "content": { "description": "Optional. Inline document content, represented as a stream of bytes. Note: As with all `bytes` fields, protobuffers use a pure binary representation, whereas JSON representations use base64.", "format": "byte", "type": "string" }, +"documentLayout": { +"$ref": "GoogleCloudDocumentaiV1beta1DocumentDocumentLayout", +"description": "Parsed layout of the document." +}, "entities": { "description": "A list of entities detected on Document.text. 
For document shards, entities in this list may cross shard boundaries.", "items": { @@ -2698,6 +2706,282 @@ }, "type": "object" }, +"GoogleCloudDocumentaiV1beta1DocumentChunkedDocument": { +"description": "Represents the chunks that the document is divided into.", +"id": "GoogleCloudDocumentaiV1beta1DocumentChunkedDocument", +"properties": { +"chunks": { +"description": "List of chunks.", +"items": { +"$ref": "GoogleCloudDocumentaiV1beta1DocumentChunkedDocumentChunk" +}, +"type": "array" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta1DocumentChunkedDocumentChunk": { +"description": "Represents a chunk.", +"id": "GoogleCloudDocumentaiV1beta1DocumentChunkedDocumentChunk", +"properties": { +"chunkId": { +"description": "ID of the chunk.", +"type": "string" +}, +"content": { +"description": "Text content of the chunk.", +"type": "string" +}, +"pageFooters": { +"description": "Page footers associated with the chunk.", +"items": { +"$ref": "GoogleCloudDocumentaiV1beta1DocumentChunkedDocumentChunkChunkPageFooter" +}, +"type": "array" +}, +"pageHeaders": { +"description": "Page headers associated with the chunk.", +"items": { +"$ref": "GoogleCloudDocumentaiV1beta1DocumentChunkedDocumentChunkChunkPageHeader" +}, +"type": "array" +}, +"pageSpan": { +"$ref": "GoogleCloudDocumentaiV1beta1DocumentChunkedDocumentChunkChunkPageSpan", +"description": "Page span of the chunk." +}, +"sourceBlockIds": { +"description": "Unused.", +"items": { +"type": "string" +}, +"type": "array" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta1DocumentChunkedDocumentChunkChunkPageFooter": { +"description": "Represents the page footer associated with the chunk.", +"id": "GoogleCloudDocumentaiV1beta1DocumentChunkedDocumentChunkChunkPageFooter", +"properties": { +"pageSpan": { +"$ref": "GoogleCloudDocumentaiV1beta1DocumentChunkedDocumentChunkChunkPageSpan", +"description": "Page span of the footer." +}, +"text": { +"description": "Footer in text format.", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta1DocumentChunkedDocumentChunkChunkPageHeader": { +"description": "Represents the page header associated with the chunk.", +"id": "GoogleCloudDocumentaiV1beta1DocumentChunkedDocumentChunkChunkPageHeader", +"properties": { +"pageSpan": { +"$ref": "GoogleCloudDocumentaiV1beta1DocumentChunkedDocumentChunkChunkPageSpan", +"description": "Page span of the header." 
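The text-block `type` values enumerated above (`heading-1` through `heading-5`, `paragraph`, and so on) make it possible to recover a document outline from the parsed layout. A minimal sketch under the same assumption that `document` is a Document dict with the new `documentLayout` field:

def outline(blocks, depth=0):
    # Print heading text blocks, indenting by nesting depth.
    for block in blocks:
        tb = block.get("textBlock", {})
        if tb.get("type", "").startswith("heading-"):
            print("  " * depth + tb.get("text", ""))
        outline(tb.get("blocks", []), depth + 1)

outline(document.get("documentLayout", {}).get("blocks", []))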
+}, +"text": { +"description": "Header in text format.", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta1DocumentChunkedDocumentChunkChunkPageSpan": { +"description": "Represents where the chunk starts and ends in the document.", +"id": "GoogleCloudDocumentaiV1beta1DocumentChunkedDocumentChunkChunkPageSpan", +"properties": { +"pageEnd": { +"description": "Page where chunk ends in the document.", +"format": "int32", +"type": "integer" +}, +"pageStart": { +"description": "Page where chunk starts in the document.", +"format": "int32", +"type": "integer" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta1DocumentDocumentLayout": { +"description": "Represents the parsed layout of a document as a collection of blocks that the document is divided into.", +"id": "GoogleCloudDocumentaiV1beta1DocumentDocumentLayout", +"properties": { +"blocks": { +"description": "List of blocks in the document.", +"items": { +"$ref": "GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlock" +}, +"type": "array" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlock": { +"description": "Represents a block. A block could be one of the various types (text, table, list) supported.", +"id": "GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlock", +"properties": { +"blockId": { +"description": "ID of the block.", +"type": "string" +}, +"listBlock": { +"$ref": "GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlockLayoutListBlock", +"description": "Block consisting of list content/structure." +}, +"pageSpan": { +"$ref": "GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlockLayoutPageSpan", +"description": "Page span of the block." +}, +"tableBlock": { +"$ref": "GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlockLayoutTableBlock", +"description": "Block consisting of table content/structure." +}, +"textBlock": { +"$ref": "GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlockLayoutTextBlock", +"description": "Block consisting of text content." +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlockLayoutListBlock": { +"description": "Represents a list type block.", +"id": "GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlockLayoutListBlock", +"properties": { +"listEntries": { +"description": "List entries that constitute a list block.", +"items": { +"$ref": "GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlockLayoutListEntry" +}, +"type": "array" +}, +"type": { +"description": "Type of the list_entries (if exist). Available options are `ordered` and `unordered`.", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlockLayoutListEntry": { +"description": "Represents an entry in the list.", +"id": "GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlockLayoutListEntry", +"properties": { +"blocks": { +"description": "A list entry is a list of blocks. 
Repeated blocks support further hierarchies and nested blocks.", +"items": { +"$ref": "GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlock" +}, +"type": "array" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlockLayoutPageSpan": { +"description": "Represents where the block starts and ends in the document.", +"id": "GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlockLayoutPageSpan", +"properties": { +"pageEnd": { +"description": "Page where block ends in the document.", +"format": "int32", +"type": "integer" +}, +"pageStart": { +"description": "Page where block starts in the document.", +"format": "int32", +"type": "integer" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlockLayoutTableBlock": { +"description": "Represents a table type block.", +"id": "GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlockLayoutTableBlock", +"properties": { +"bodyRows": { +"description": "Body rows containing main table content.", +"items": { +"$ref": "GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlockLayoutTableRow" +}, +"type": "array" +}, +"caption": { +"description": "Table caption/title.", +"type": "string" +}, +"headerRows": { +"description": "Header rows at the top of the table.", +"items": { +"$ref": "GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlockLayoutTableRow" +}, +"type": "array" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlockLayoutTableCell": { +"description": "Represents a cell in a table row.", +"id": "GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlockLayoutTableCell", +"properties": { +"blocks": { +"description": "A table cell is a list of blocks. Repeated blocks support further hierarchies and nested blocks.", +"items": { +"$ref": "GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlock" +}, +"type": "array" +}, +"colSpan": { +"description": "How many columns this cell spans.", +"format": "int32", +"type": "integer" +}, +"rowSpan": { +"description": "How many rows this cell spans.", +"format": "int32", +"type": "integer" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlockLayoutTableRow": { +"description": "Represents a row in a table.", +"id": "GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlockLayoutTableRow", +"properties": { +"cells": { +"description": "A table row is a list of table cells.", +"items": { +"$ref": "GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlockLayoutTableCell" +}, +"type": "array" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlockLayoutTextBlock": { +"description": "Represents a text type block.", +"id": "GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlockLayoutTextBlock", +"properties": { +"blocks": { +"description": "A text block could further have child blocks. Repeated blocks support further hierarchies and nested blocks.", +"items": { +"$ref": "GoogleCloudDocumentaiV1beta1DocumentDocumentLayoutDocumentLayoutBlock" +}, +"type": "array" +}, +"text": { +"description": "Text content stored in the block.", +"type": "string" +}, +"type": { +"description": "Type of the text in the block. 
Available options are: `paragraph`, `subtitle`, `heading-1`, `heading-2`, `heading-3`, `heading-4`, `heading-5`, `header`, `footer`.", +"type": "string" +} +}, +"type": "object" +}, "GoogleCloudDocumentaiV1beta1DocumentEntity": { "description": "An entity that could be a phrase in the text or a property that belongs to the document. It is a known entity type, such as a person, an organization, or location.", "id": "GoogleCloudDocumentaiV1beta1DocumentEntity", @@ -4001,11 +4285,19 @@ true "description": "Document represents the canonical document resource in Document AI. It is an interchange format that provides insights into documents and allows for collaboration between users and Document AI to iterate and optimize for quality.", "id": "GoogleCloudDocumentaiV1beta2Document", "properties": { +"chunkedDocument": { +"$ref": "GoogleCloudDocumentaiV1beta2DocumentChunkedDocument", +"description": "Document chunked based on chunking config." +}, "content": { "description": "Optional. Inline document content, represented as a stream of bytes. Note: As with all `bytes` fields, protobuffers use a pure binary representation, whereas JSON representations use base64.", "format": "byte", "type": "string" }, +"documentLayout": { +"$ref": "GoogleCloudDocumentaiV1beta2DocumentDocumentLayout", +"description": "Parsed layout of the document." +}, "entities": { "description": "A list of entities detected on Document.text. For document shards, entities in this list may cross shard boundaries.", "items": { @@ -4079,6 +4371,282 @@ true }, "type": "object" }, +"GoogleCloudDocumentaiV1beta2DocumentChunkedDocument": { +"description": "Represents the chunks that the document is divided into.", +"id": "GoogleCloudDocumentaiV1beta2DocumentChunkedDocument", +"properties": { +"chunks": { +"description": "List of chunks.", +"items": { +"$ref": "GoogleCloudDocumentaiV1beta2DocumentChunkedDocumentChunk" +}, +"type": "array" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta2DocumentChunkedDocumentChunk": { +"description": "Represents a chunk.", +"id": "GoogleCloudDocumentaiV1beta2DocumentChunkedDocumentChunk", +"properties": { +"chunkId": { +"description": "ID of the chunk.", +"type": "string" +}, +"content": { +"description": "Text content of the chunk.", +"type": "string" +}, +"pageFooters": { +"description": "Page footers associated with the chunk.", +"items": { +"$ref": "GoogleCloudDocumentaiV1beta2DocumentChunkedDocumentChunkChunkPageFooter" +}, +"type": "array" +}, +"pageHeaders": { +"description": "Page headers associated with the chunk.", +"items": { +"$ref": "GoogleCloudDocumentaiV1beta2DocumentChunkedDocumentChunkChunkPageHeader" +}, +"type": "array" +}, +"pageSpan": { +"$ref": "GoogleCloudDocumentaiV1beta2DocumentChunkedDocumentChunkChunkPageSpan", +"description": "Page span of the chunk." +}, +"sourceBlockIds": { +"description": "Unused.", +"items": { +"type": "string" +}, +"type": "array" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta2DocumentChunkedDocumentChunkChunkPageFooter": { +"description": "Represents the page footer associated with the chunk.", +"id": "GoogleCloudDocumentaiV1beta2DocumentChunkedDocumentChunkChunkPageFooter", +"properties": { +"pageSpan": { +"$ref": "GoogleCloudDocumentaiV1beta2DocumentChunkedDocumentChunkChunkPageSpan", +"description": "Page span of the footer." 
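Table blocks, per the TableBlock/TableRow/TableCell schemas above, store header and body rows whose cells are themselves lists of blocks. A sketch of flattening one into rows of strings (cell text is recovered from nested text blocks; both helpers are illustrative assumptions):

def cell_text(cell):
    # Join the text of any nested text blocks in the cell.
    parts = []
    for block in cell.get("blocks", []):
        tb = block.get("textBlock", {})
        if tb.get("text"):
            parts.append(tb["text"])
    return " ".join(parts)

def table_rows(table_block):
    # Header rows first, then body rows, as lists of cell strings.
    rows = table_block.get("headerRows", []) + table_block.get("bodyRows", [])
    return [[cell_text(c) for c in row.get("cells", [])] for row in rows]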
+}, +"text": { +"description": "Footer in text format.", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta2DocumentChunkedDocumentChunkChunkPageHeader": { +"description": "Represents the page header associated with the chunk.", +"id": "GoogleCloudDocumentaiV1beta2DocumentChunkedDocumentChunkChunkPageHeader", +"properties": { +"pageSpan": { +"$ref": "GoogleCloudDocumentaiV1beta2DocumentChunkedDocumentChunkChunkPageSpan", +"description": "Page span of the header." +}, +"text": { +"description": "Header in text format.", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta2DocumentChunkedDocumentChunkChunkPageSpan": { +"description": "Represents where the chunk starts and ends in the document.", +"id": "GoogleCloudDocumentaiV1beta2DocumentChunkedDocumentChunkChunkPageSpan", +"properties": { +"pageEnd": { +"description": "Page where chunk ends in the document.", +"format": "int32", +"type": "integer" +}, +"pageStart": { +"description": "Page where chunk starts in the document.", +"format": "int32", +"type": "integer" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta2DocumentDocumentLayout": { +"description": "Represents the parsed layout of a document as a collection of blocks that the document is divided into.", +"id": "GoogleCloudDocumentaiV1beta2DocumentDocumentLayout", +"properties": { +"blocks": { +"description": "List of blocks in the document.", +"items": { +"$ref": "GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlock" +}, +"type": "array" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlock": { +"description": "Represents a block. A block could be one of the various types (text, table, list) supported.", +"id": "GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlock", +"properties": { +"blockId": { +"description": "ID of the block.", +"type": "string" +}, +"listBlock": { +"$ref": "GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlockLayoutListBlock", +"description": "Block consisting of list content/structure." +}, +"pageSpan": { +"$ref": "GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlockLayoutPageSpan", +"description": "Page span of the block." +}, +"tableBlock": { +"$ref": "GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlockLayoutTableBlock", +"description": "Block consisting of table content/structure." +}, +"textBlock": { +"$ref": "GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlockLayoutTextBlock", +"description": "Block consisting of text content." +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlockLayoutListBlock": { +"description": "Represents a list type block.", +"id": "GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlockLayoutListBlock", +"properties": { +"listEntries": { +"description": "List entries that constitute a list block.", +"items": { +"$ref": "GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlockLayoutListEntry" +}, +"type": "array" +}, +"type": { +"description": "Type of the list_entries (if exist). 
Available options are `ordered` and `unordered`.", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlockLayoutListEntry": { +"description": "Represents an entry in the list.", +"id": "GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlockLayoutListEntry", +"properties": { +"blocks": { +"description": "A list entry is a list of blocks. Repeated blocks support further hierarchies and nested blocks.", +"items": { +"$ref": "GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlock" +}, +"type": "array" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlockLayoutPageSpan": { +"description": "Represents where the block starts and ends in the document.", +"id": "GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlockLayoutPageSpan", +"properties": { +"pageEnd": { +"description": "Page where block ends in the document.", +"format": "int32", +"type": "integer" +}, +"pageStart": { +"description": "Page where block starts in the document.", +"format": "int32", +"type": "integer" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlockLayoutTableBlock": { +"description": "Represents a table type block.", +"id": "GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlockLayoutTableBlock", +"properties": { +"bodyRows": { +"description": "Body rows containing main table content.", +"items": { +"$ref": "GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlockLayoutTableRow" +}, +"type": "array" +}, +"caption": { +"description": "Table caption/title.", +"type": "string" +}, +"headerRows": { +"description": "Header rows at the top of the table.", +"items": { +"$ref": "GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlockLayoutTableRow" +}, +"type": "array" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlockLayoutTableCell": { +"description": "Represents a cell in a table row.", +"id": "GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlockLayoutTableCell", +"properties": { +"blocks": { +"description": "A table cell is a list of blocks. Repeated blocks support further hierarchies and nested blocks.", +"items": { +"$ref": "GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlock" +}, +"type": "array" +}, +"colSpan": { +"description": "How many columns this cell spans.", +"format": "int32", +"type": "integer" +}, +"rowSpan": { +"description": "How many rows this cell spans.", +"format": "int32", +"type": "integer" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlockLayoutTableRow": { +"description": "Represents a row in a table.", +"id": "GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlockLayoutTableRow", +"properties": { +"cells": { +"description": "A table row is a list of table cells.", +"items": { +"$ref": "GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlockLayoutTableCell" +}, +"type": "array" +} +}, +"type": "object" +}, +"GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlockLayoutTextBlock": { +"description": "Represents a text type block.", +"id": "GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlockLayoutTextBlock", +"properties": { +"blocks": { +"description": "A text block could further have child blocks. 
Repeated blocks support further hierarchies and nested blocks.", +"items": { +"$ref": "GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlock" +}, +"type": "array" +}, +"text": { +"description": "Text content stored in the block.", +"type": "string" +}, +"type": { +"description": "Type of the text in the block. Available options are: `paragraph`, `subtitle`, `heading-1`, `heading-2`, `heading-3`, `heading-4`, `heading-5`, `header`, `footer`.", +"type": "string" +} +}, +"type": "object" +}, "GoogleCloudDocumentaiV1beta2DocumentEntity": { "description": "An entity that could be a phrase in the text or a property that belongs to the document. It is a known entity type, such as a person, an organization, or location.", "id": "GoogleCloudDocumentaiV1beta2DocumentEntity", @@ -5708,6 +6276,16 @@ true "description": "Dataset resource name. Format: `projects/{project}/locations/{location}/processors/{processor}/dataset`", "type": "string" }, +"satisfiesPzi": { +"description": "Output only. Reserved for future use.", +"readOnly": true, +"type": "boolean" +}, +"satisfiesPzs": { +"description": "Output only. Reserved for future use.", +"readOnly": true, +"type": "boolean" +}, "spannerIndexingConfig": { "$ref": "GoogleCloudDocumentaiV1beta3DatasetSpannerIndexingConfig", "description": "Optional. A lightweight indexing source with low latency and high reliability, but lacking advanced features like CMEK and content-based search." @@ -5986,7 +6564,7 @@ true "description": "Page span of the chunk." }, "sourceBlockIds": { -"description": "DO NOT USE. List of all parsed documents layout source blocks used to generate the chunk.", +"description": "Unused.", "items": { "type": "string" }, @@ -8580,6 +9158,16 @@ true "readOnly": true, "type": "array" }, +"satisfiesPzi": { +"description": "Output only. Reserved for future use.", +"readOnly": true, +"type": "boolean" +}, +"satisfiesPzs": { +"description": "Output only. Reserved for future use.", +"readOnly": true, +"type": "boolean" +}, "state": { "description": "Output only. The state of the processor.", "enum": [ @@ -8741,6 +9329,16 @@ true "description": "Identifier. The resource name of the processor version. Format: `projects/{project}/locations/{location}/processors/{processor}/processorVersions/{processor_version}`", "type": "string" }, +"satisfiesPzi": { +"description": "Output only. Reserved for future use.", +"readOnly": true, +"type": "boolean" +}, +"satisfiesPzs": { +"description": "Output only. Reserved for future use.", +"readOnly": true, +"type": "boolean" +}, "state": { "description": "Output only. 
The state of the processor version.", "enum": [ diff --git a/googleapiclient/discovery_cache/documents/domainsrdap.v1.json b/googleapiclient/discovery_cache/documents/domainsrdap.v1.json index c4a183a908d..acdfd0d80ad 100644 --- a/googleapiclient/discovery_cache/documents/domainsrdap.v1.json +++ b/googleapiclient/discovery_cache/documents/domainsrdap.v1.json @@ -289,7 +289,7 @@ } } }, -"revision": "20240522", +"revision": "20240603", "rootUrl": "https://domainsrdap.googleapis.com/", "schemas": { "HttpBody": { diff --git a/googleapiclient/discovery_cache/documents/doubleclickbidmanager.v2.json b/googleapiclient/discovery_cache/documents/doubleclickbidmanager.v2.json index 1ed26752844..3efe53e22c2 100644 --- a/googleapiclient/discovery_cache/documents/doubleclickbidmanager.v2.json +++ b/googleapiclient/discovery_cache/documents/doubleclickbidmanager.v2.json @@ -319,7 +319,7 @@ } } }, -"revision": "20240515", +"revision": "20240522", "rootUrl": "https://doubleclickbidmanager.googleapis.com/", "schemas": { "DataRange": { diff --git a/googleapiclient/discovery_cache/documents/doubleclicksearch.v2.json b/googleapiclient/discovery_cache/documents/doubleclicksearch.v2.json index 9868be7073a..039fd7fd657 100644 --- a/googleapiclient/discovery_cache/documents/doubleclicksearch.v2.json +++ b/googleapiclient/discovery_cache/documents/doubleclicksearch.v2.json @@ -543,7 +543,7 @@ } } }, -"revision": "20240507", +"revision": "20240530", "rootUrl": "https://doubleclicksearch.googleapis.com/", "schemas": { "Availability": { diff --git a/googleapiclient/discovery_cache/documents/drive.v2.json b/googleapiclient/discovery_cache/documents/drive.v2.json index f5b9e75215b..3e7fafba568 100644 --- a/googleapiclient/discovery_cache/documents/drive.v2.json +++ b/googleapiclient/discovery_cache/documents/drive.v2.json @@ -3869,7 +3869,7 @@ } } }, -"revision": "20240521", +"revision": "20240522", "rootUrl": "https://www.googleapis.com/", "schemas": { "About": { diff --git a/googleapiclient/discovery_cache/documents/drive.v3.json b/googleapiclient/discovery_cache/documents/drive.v3.json index a17e25ed5c8..6b731d28260 100644 --- a/googleapiclient/discovery_cache/documents/drive.v3.json +++ b/googleapiclient/discovery_cache/documents/drive.v3.json @@ -2523,7 +2523,7 @@ } } }, -"revision": "20240521", +"revision": "20240522", "rootUrl": "https://www.googleapis.com/", "schemas": { "About": { diff --git a/googleapiclient/discovery_cache/documents/driveactivity.v2.json b/googleapiclient/discovery_cache/documents/driveactivity.v2.json index a78812580c7..210d7a1d071 100644 --- a/googleapiclient/discovery_cache/documents/driveactivity.v2.json +++ b/googleapiclient/discovery_cache/documents/driveactivity.v2.json @@ -132,7 +132,7 @@ } } }, -"revision": "20240521", +"revision": "20240528", "rootUrl": "https://driveactivity.googleapis.com/", "schemas": { "Action": { diff --git a/googleapiclient/discovery_cache/documents/drivelabels.v2.json b/googleapiclient/discovery_cache/documents/drivelabels.v2.json index 2d0d3c2e60b..1644b8946b6 100644 --- a/googleapiclient/discovery_cache/documents/drivelabels.v2.json +++ b/googleapiclient/discovery_cache/documents/drivelabels.v2.json @@ -1032,7 +1032,7 @@ } } }, -"revision": "20240522", +"revision": "20240528", "rootUrl": "https://drivelabels.googleapis.com/", "schemas": { "GoogleAppsDriveLabelsV2BadgeColors": { diff --git a/googleapiclient/discovery_cache/documents/drivelabels.v2beta.json b/googleapiclient/discovery_cache/documents/drivelabels.v2beta.json index 60bbfd05cbb..5983772e86a 
100644 --- a/googleapiclient/discovery_cache/documents/drivelabels.v2beta.json +++ b/googleapiclient/discovery_cache/documents/drivelabels.v2beta.json @@ -1032,7 +1032,7 @@ } } }, -"revision": "20240522", +"revision": "20240528", "rootUrl": "https://drivelabels.googleapis.com/", "schemas": { "GoogleAppsDriveLabelsV2betaBadgeColors": { diff --git a/googleapiclient/discovery_cache/documents/essentialcontacts.v1.json b/googleapiclient/discovery_cache/documents/essentialcontacts.v1.json index 80172efd135..cbc4caaf7ce 100644 --- a/googleapiclient/discovery_cache/documents/essentialcontacts.v1.json +++ b/googleapiclient/discovery_cache/documents/essentialcontacts.v1.json @@ -850,7 +850,7 @@ } } }, -"revision": "20240519", +"revision": "20240526", "rootUrl": "https://essentialcontacts.googleapis.com/", "schemas": { "GoogleCloudEssentialcontactsV1ComputeContactsResponse": { diff --git a/googleapiclient/discovery_cache/documents/eventarc.v1.json b/googleapiclient/discovery_cache/documents/eventarc.v1.json index 1a6dc763a3d..44d3e0e8abe 100644 --- a/googleapiclient/discovery_cache/documents/eventarc.v1.json +++ b/googleapiclient/discovery_cache/documents/eventarc.v1.json @@ -1197,7 +1197,7 @@ } } }, -"revision": "20240510", +"revision": "20240524", "rootUrl": "https://eventarc.googleapis.com/", "schemas": { "AuditConfig": { diff --git a/googleapiclient/discovery_cache/documents/factchecktools.v1alpha1.json b/googleapiclient/discovery_cache/documents/factchecktools.v1alpha1.json index d0476975a65..df5a1bca2dd 100644 --- a/googleapiclient/discovery_cache/documents/factchecktools.v1alpha1.json +++ b/googleapiclient/discovery_cache/documents/factchecktools.v1alpha1.json @@ -344,7 +344,7 @@ } } }, -"revision": "20240526", +"revision": "20240602", "rootUrl": "https://factchecktools.googleapis.com/", "schemas": { "GoogleFactcheckingFactchecktoolsV1alpha1Claim": { diff --git a/googleapiclient/discovery_cache/documents/fcm.v1.json b/googleapiclient/discovery_cache/documents/fcm.v1.json index 9df067b7d96..1007f93953f 100644 --- a/googleapiclient/discovery_cache/documents/fcm.v1.json +++ b/googleapiclient/discovery_cache/documents/fcm.v1.json @@ -146,7 +146,7 @@ } } }, -"revision": "20240524", +"revision": "20240528", "rootUrl": "https://fcm.googleapis.com/", "schemas": { "AndroidConfig": { diff --git a/googleapiclient/discovery_cache/documents/fcmdata.v1beta1.json b/googleapiclient/discovery_cache/documents/fcmdata.v1beta1.json index 4246ad7f0bf..6304622802b 100644 --- a/googleapiclient/discovery_cache/documents/fcmdata.v1beta1.json +++ b/googleapiclient/discovery_cache/documents/fcmdata.v1beta1.json @@ -154,7 +154,7 @@ } } }, -"revision": "20240526", +"revision": "20240602", "rootUrl": "https://fcmdata.googleapis.com/", "schemas": { "GoogleFirebaseFcmDataV1beta1AndroidDeliveryData": { @@ -279,6 +279,11 @@ "description": "Percentage breakdown of message delivery outcomes. These categories are mutually exclusive. All percentages are calculated with countMessagesAccepted as the denominator. 
These categories may not account for all message outcomes.", "id": "GoogleFirebaseFcmDataV1beta1MessageOutcomePercents", "properties": { +"collapsed": { +"description": "The percentage of accepted messages that were [collapsed](https://firebase.google.com/docs/cloud-messaging/concept-options#collapsible_and_non-collapsible_messages) by another message.", +"format": "float", +"type": "number" +}, "delivered": { "description": "The percentage of all accepted messages that were successfully delivered to the device.", "format": "float", @@ -299,6 +304,11 @@ "format": "float", "type": "number" }, +"droppedTtlExpired": { +"description": "The percentage of accepted messages that expired because [Time To Live (TTL)](https://firebase.google.com/docs/cloud-messaging/concept-options#ttl) elapsed before the target device reconnected.", +"format": "float", +"type": "number" +}, "pending": { "description": "The percentage of messages accepted on this day that were not dropped and not delivered, due to the device being disconnected (as of the end of the America/Los_Angeles day when the message was sent to FCM). A portion of these messages will be delivered the next day when the device connects but others may be destined to devices that ultimately never reconnect.", "format": "float", diff --git a/googleapiclient/discovery_cache/documents/file.v1.json b/googleapiclient/discovery_cache/documents/file.v1.json index 7eabd7da6e1..04b9ad2b29e 100644 --- a/googleapiclient/discovery_cache/documents/file.v1.json +++ b/googleapiclient/discovery_cache/documents/file.v1.json @@ -874,7 +874,7 @@ } } }, -"revision": "20240511", +"revision": "20240523", "rootUrl": "https://file.googleapis.com/", "schemas": { "Backup": { diff --git a/googleapiclient/discovery_cache/documents/file.v1beta1.json b/googleapiclient/discovery_cache/documents/file.v1beta1.json index 274fa0ed27f..fbb8fd86274 100644 --- a/googleapiclient/discovery_cache/documents/file.v1beta1.json +++ b/googleapiclient/discovery_cache/documents/file.v1beta1.json @@ -1069,7 +1069,7 @@ } } }, -"revision": "20240511", +"revision": "20240523", "rootUrl": "https://file.googleapis.com/", "schemas": { "Backup": { diff --git a/googleapiclient/discovery_cache/documents/firebase.v1beta1.json b/googleapiclient/discovery_cache/documents/firebase.v1beta1.json index b94b6db6ebd..e1c307e67a2 100644 --- a/googleapiclient/discovery_cache/documents/firebase.v1beta1.json +++ b/googleapiclient/discovery_cache/documents/firebase.v1beta1.json @@ -1324,7 +1324,7 @@ } } }, -"revision": "20240524", +"revision": "20240531", "rootUrl": "https://firebase.googleapis.com/", "schemas": { "AddFirebaseRequest": { diff --git a/googleapiclient/discovery_cache/documents/firebaseappcheck.v1.json b/googleapiclient/discovery_cache/documents/firebaseappcheck.v1.json index 93aebdf0fce..eab376e106c 100644 --- a/googleapiclient/discovery_cache/documents/firebaseappcheck.v1.json +++ b/googleapiclient/discovery_cache/documents/firebaseappcheck.v1.json @@ -138,6 +138,126 @@ } } }, +"oauthClients": { +"methods": { +"exchangeAppAttestAssertion": { +"description": "Accepts an App Attest assertion and an artifact previously obtained from ExchangeAppAttestAttestation and verifies those with Apple. If valid, returns an AppCheckToken.", +"flatPath": "v1/oauthClients/{oauthClientsId}:exchangeAppAttestAssertion", +"httpMethod": "POST", +"id": "firebaseappcheck.oauthClients.exchangeAppAttestAssertion", +"parameterOrder": [ +"app" +], +"parameters": { +"app": { +"description": "Required. 
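The fcmdata.v1beta1 hunk above adds `collapsed` and `droppedTtlExpired` to `GoogleFirebaseFcmDataV1beta1MessageOutcomePercents`. A hedged sketch of reading them with this client library; the parent value is a placeholder, and the response field paths are assumed from the schema names in the diff rather than verified output:

```python
from googleapiclient.discovery import build

# Assumes Application Default Credentials are configured.
service = build("fcmdata", "v1beta1")

resp = (
    service.projects()
    .androidApps()
    .deliveryData()
    .list(parent="projects/my-project/androidApps/my-app-id")  # placeholder IDs
    .execute()
)
for entry in resp.get("androidDeliveryData", []):
    pct = entry.get("data", {}).get("messageOutcomePercents", {})
    # Both new fields are float percentages of countMessagesAccepted.
    print(entry.get("date"), pct.get("collapsed"), pct.get("droppedTtlExpired"))
```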
The relative resource name of the iOS app, in the format: ``` projects/{project_number}/apps/{app_id} ``` If necessary, the `project_number` element can be replaced with the project ID of the Firebase project. Learn more about using project identifiers in Google's [AIP 2510](https://google.aip.dev/cloud/2510) standard.", +"location": "path", +"pattern": "^oauthClients/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1/{+app}:exchangeAppAttestAssertion", +"request": { +"$ref": "GoogleFirebaseAppcheckV1ExchangeAppAttestAssertionRequest" +}, +"response": { +"$ref": "GoogleFirebaseAppcheckV1AppCheckToken" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform", +"https://www.googleapis.com/auth/firebase" +] +}, +"exchangeAppAttestAttestation": { +"description": "Accepts an App Attest CBOR attestation and verifies it with Apple using your preconfigured team and bundle IDs. If valid, returns an attestation artifact that can later be exchanged for an AppCheckToken using ExchangeAppAttestAssertion. For convenience and performance, this method's response object will also contain an AppCheckToken (if the verification is successful).", +"flatPath": "v1/oauthClients/{oauthClientsId}:exchangeAppAttestAttestation", +"httpMethod": "POST", +"id": "firebaseappcheck.oauthClients.exchangeAppAttestAttestation", +"parameterOrder": [ +"app" +], +"parameters": { +"app": { +"description": "Required. The relative resource name of the iOS app, in the format: ``` projects/{project_number}/apps/{app_id} ``` If necessary, the `project_number` element can be replaced with the project ID of the Firebase project. Learn more about using project identifiers in Google's [AIP 2510](https://google.aip.dev/cloud/2510) standard.", +"location": "path", +"pattern": "^oauthClients/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1/{+app}:exchangeAppAttestAttestation", +"request": { +"$ref": "GoogleFirebaseAppcheckV1ExchangeAppAttestAttestationRequest" +}, +"response": { +"$ref": "GoogleFirebaseAppcheckV1ExchangeAppAttestAttestationResponse" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform", +"https://www.googleapis.com/auth/firebase" +] +}, +"exchangeDebugToken": { +"description": "Validates a debug token secret that you have previously created using CreateDebugToken. If valid, returns an AppCheckToken. Note that a restrictive quota is enforced on this method to prevent accidental exposure of the app to abuse.", +"flatPath": "v1/oauthClients/{oauthClientsId}:exchangeDebugToken", +"httpMethod": "POST", +"id": "firebaseappcheck.oauthClients.exchangeDebugToken", +"parameterOrder": [ +"app" +], +"parameters": { +"app": { +"description": "Required. The relative resource name of the app, in the format: ``` projects/{project_number}/apps/{app_id} ``` If necessary, the `project_number` element can be replaced with the project ID of the Firebase project. 
Learn more about using project identifiers in Google's [AIP 2510](https://google.aip.dev/cloud/2510) standard.", +"location": "path", +"pattern": "^oauthClients/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1/{+app}:exchangeDebugToken", +"request": { +"$ref": "GoogleFirebaseAppcheckV1ExchangeDebugTokenRequest" +}, +"response": { +"$ref": "GoogleFirebaseAppcheckV1AppCheckToken" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform", +"https://www.googleapis.com/auth/firebase" +] +}, +"generateAppAttestChallenge": { +"description": "Generates a challenge that protects the integrity of an immediately following call to ExchangeAppAttestAttestation or ExchangeAppAttestAssertion. A challenge should not be reused for multiple calls.", +"flatPath": "v1/oauthClients/{oauthClientsId}:generateAppAttestChallenge", +"httpMethod": "POST", +"id": "firebaseappcheck.oauthClients.generateAppAttestChallenge", +"parameterOrder": [ +"app" +], +"parameters": { +"app": { +"description": "Required. The relative resource name of the iOS app, in the format: ``` projects/{project_number}/apps/{app_id} ``` If necessary, the `project_number` element can be replaced with the project ID of the Firebase project. Learn more about using project identifiers in Google's [AIP 2510](https://google.aip.dev/cloud/2510) standard.", +"location": "path", +"pattern": "^oauthClients/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1/{+app}:generateAppAttestChallenge", +"request": { +"$ref": "GoogleFirebaseAppcheckV1GenerateAppAttestChallengeRequest" +}, +"response": { +"$ref": "GoogleFirebaseAppcheckV1GenerateAppAttestChallengeResponse" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform", +"https://www.googleapis.com/auth/firebase" +] +} +} +}, "projects": { "resources": { "apps": { @@ -1070,7 +1190,7 @@ ] }, "patch": { -"description": "Updates the RecaptchaV3Config for the specified app. While this configuration is incomplete or invalid, the app will be unable to exchange reCAPTCHA tokens for App Check tokens. For security reasons, the `site_secret` field is never populated in the response.", +"description": "Updates the RecaptchaV3Config for the specified app. While this configuration is incomplete or invalid, the app will be unable to exchange reCAPTCHA V3 tokens for App Check tokens. For security reasons, the `site_secret` field is never populated in the response.", "flatPath": "v1/projects/{projectsId}/apps/{appsId}/recaptchaV3Config", "httpMethod": "PATCH", "id": "firebaseappcheck.projects.apps.recaptchaV3Config.patch", @@ -1343,7 +1463,7 @@ } } }, -"revision": "20240506", +"revision": "20240603", "rootUrl": "https://firebaseappcheck.googleapis.com/", "schemas": { "GoogleFirebaseAppcheckV1AppAttestConfig": { @@ -1938,7 +2058,7 @@ "enumDescriptions": [ "Firebase App Check is not enforced for the service, nor are App Check metrics collected. Though the service is not protected by App Check in this mode, other applicable protections, such as user authorization, are still enforced. An unconfigured service is in this mode by default.", "Firebase App Check is not enforced for the service. App Check metrics are collected to help you decide when to turn on enforcement for the service. Though the service is not protected by App Check in this mode, other applicable protections, such as user authorization, are still enforced. Some services require certain conditions to be met before they will work with App Check, such as requiring you to upgrade to a specific service tier. 
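The new top-level `oauthClients` resource in firebaseappcheck.v1 mirrors the per-app App Attest and debug-token exchange methods. A sketch of the challenge step, with a placeholder OAuth client ID and an assumed-empty request body (the `GoogleFirebaseAppcheckV1GenerateAppAttestChallengeRequest` fields are not shown in this diff):

```python
from googleapiclient.discovery import build

appcheck = build("firebaseappcheck", "v1")

challenge = (
    appcheck.oauthClients()
    .generateAppAttestChallenge(
        app="oauthClients/123456789",  # placeholder; must match ^oauthClients/[^/]+$
        body={},  # assumption: request fields are not shown in this diff
    )
    .execute()
)
# Per the method description above, the returned challenge protects exactly one
# following exchangeAppAttestAttestation or exchangeAppAttestAssertion call and
# should not be reused.
print(challenge)
```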
Until those requirements are met for a service, this `UNENFORCED` setting will have no effect and App Check will not work with that service.", -"Firebase App Check is enforced for the service. The service will reject any request that attempts to access your project's resources if it does not have valid App Check token attached, with some exceptions depending on the service; for example, some services will still allow requests bearing the developer's privileged service account credentials without an App Check token. App Check metrics continue to be collected to help you detect issues with your App Check integration and monitor the composition of your callers. While the service is protected by App Check, other applicable protections, such as user authorization, continue to be enforced at the same time. Use caution when choosing to enforce App Check on a Firebase service. If your users have not updated to an App Check capable version of your app, their apps will no longer be able to use your Firebase services that are enforcing App Check. App Check metrics can help you decide whether to enforce App Check on your Firebase services. If your app has not launched yet, you should enable enforcement immediately, since there are no outdated clients in use. Some services require certain conditions to be met before they will work with App Check, such as requiring you to upgrade to a specific service tier or requiring you to enable the service first. Until those requirements are met for a service, this `ENFORCED` setting will have no effect and App Check will not work with that service." +"Firebase App Check is enforced for the service. The service will reject any request that attempts to access your project's resources if it does not have valid App Check token attached, with some exceptions depending on the service; for example, some services will still allow requests bearing the developer's privileged service account credentials without an App Check token. App Check metrics continue to be collected to help you detect issues with your App Check integration and monitor the composition of your callers. While the service is protected by App Check, other applicable protections, such as user authorization, continue to be enforced at the same time. Use caution when choosing to enforce App Check on a Firebase service. If your users have not updated to an App Check capable version of your app, their apps will no longer be able to use your Firebase services that are enforcing App Check. App Check metrics can help you decide whether to enforce App Check on your Firebase services. If your app has not launched yet, you should enable enforcement immediately, since there are no outdated clients in use. Some services require certain conditions to be met before they will work with App Check, such as requiring you to upgrade to a specific service tier. Until those requirements are met for a service, this `ENFORCED` setting will have no effect and App Check will not work with that service." ], "type": "string" }, diff --git a/googleapiclient/discovery_cache/documents/firebaseappcheck.v1beta.json b/googleapiclient/discovery_cache/documents/firebaseappcheck.v1beta.json index b9b0c946c0f..4a42b9f4ad4 100644 --- a/googleapiclient/discovery_cache/documents/firebaseappcheck.v1beta.json +++ b/googleapiclient/discovery_cache/documents/firebaseappcheck.v1beta.json @@ -1532,7 +1532,7 @@ ], "parameters": { "name": { -"description": "Required. 
The relative resource name of the Service to retrieve, in the format: ``` projects/{project_number}/services/{service_id} ``` Note that the `service_id` element must be a supported service ID. Currently, the following service IDs are supported: * `firebasestorage.googleapis.com` (Cloud Storage for Firebase) * `firebasedatabase.googleapis.com` (Firebase Realtime Database) * `firestore.googleapis.com` (Cloud Firestore) * `identitytoolkit.googleapis.com` (Firebase Authentication with Identity Platform)", +"description": "Required. The relative resource name of the Service to retrieve, in the format: ``` projects/{project_number}/services/{service_id} ``` Note that the `service_id` element must be a supported service ID. Currently, the following service IDs are supported: * `firebasestorage.googleapis.com` (Cloud Storage for Firebase) * `firebasedatabase.googleapis.com` (Firebase Realtime Database) * `firestore.googleapis.com` (Cloud Firestore) * `identitytoolkit.googleapis.com` (Firebase Authentication with Identity Platform) * `oauth2.googleapis.com` (Google Identity for iOS)", "location": "path", "pattern": "^projects/[^/]+/services/[^/]+$", "required": true, @@ -1558,7 +1558,7 @@ ], "parameters": { "pageSize": { -"description": "The maximum number of Services to return in the response. Only explicitly configured services are returned. The server may return fewer than this at its own discretion. If no value is specified or set to zero (or too large a value is specified), the server will impose its own limit.", +"description": "The maximum number of Services to return in the response. Only explicitly configured services are returned. The server may return fewer than this at its own discretion. If no value is specified (or too large a value is specified), the server will impose its own limit.", "format": "int32", "location": "query", "type": "integer" @@ -1791,7 +1791,7 @@ ], "parameters": { "name": { -"description": "Required. The relative name of the resource policy object, in the format: ``` projects/{project_number}/services/{service_id}/resourcePolicies/{resource_policy_id} ``` Note that the `service_id` element must be a supported service ID. Currently, the following service IDs are supported: * `oauth2.googleapis.com` (Google Identity for iOS) `resource_policy_id` is a system-generated UID.", +"description": "Required. Identifier. The relative name of the resource policy object, in the format: ``` projects/{project_number}/services/{service_id}/resourcePolicies/{resource_policy_id} ``` Note that the `service_id` element must be a supported service ID. Currently, the following service IDs are supported: * `oauth2.googleapis.com` (Google Identity for iOS) `resource_policy_id` is a system-generated UID.", "location": "path", "pattern": "^projects/[^/]+/services/[^/]+/resourcePolicies/[^/]+$", "required": true, @@ -1823,7 +1823,7 @@ } } }, -"revision": "20240506", +"revision": "20240603", "rootUrl": "https://firebaseappcheck.googleapis.com/", "schemas": { "GoogleFirebaseAppcheckV1betaAppAttestConfig": { @@ -2025,7 +2025,7 @@ "type": "array" }, "updateMask": { -"description": "Optional. A comma-separated list of names of fields in the Services to update. Example: `display_name`. If this field is present, the `update_mask` field in the UpdateServiceRequest messages must all match this field, or the entire batch fails and no updates will be committed.", +"description": "Optional. A comma-separated list of names of fields in the Services to update. Example: `display_name`. 
If the `update_mask` field is set in both this request and any of the UpdateServiceRequest messages, they must match or the entire batch fails and no updates will be committed.", "format": "google-fieldmask", "type": "string" } @@ -2541,7 +2541,7 @@ "type": "string" }, "name": { -"description": "Required. The relative name of the resource policy object, in the format: ``` projects/{project_number}/services/{service_id}/resourcePolicies/{resource_policy_id} ``` Note that the `service_id` element must be a supported service ID. Currently, the following service IDs are supported: * `oauth2.googleapis.com` (Google Identity for iOS) `resource_policy_id` is a system-generated UID.", +"description": "Required. Identifier. The relative name of the resource policy object, in the format: ``` projects/{project_number}/services/{service_id}/resourcePolicies/{resource_policy_id} ``` Note that the `service_id` element must be a supported service ID. Currently, the following service IDs are supported: * `oauth2.googleapis.com` (Google Identity for iOS) `resource_policy_id` is a system-generated UID.", "type": "string" }, "targetResource": { @@ -2631,7 +2631,7 @@ "properties": { "service": { "$ref": "GoogleFirebaseAppcheckV1betaService", -"description": "Required. The Service to update. The Service's `name` field is used to identify the Service to be updated, in the format: ``` projects/{project_number}/services/{service_id} ``` Note that the `service_id` element must be a supported service ID. Currently, the following service IDs are supported: * `firebasestorage.googleapis.com` (Cloud Storage for Firebase) * `firebasedatabase.googleapis.com` (Firebase Realtime Database) * `firestore.googleapis.com` (Cloud Firestore) * `identitytoolkit.googleapis.com` (Firebase Authentication with Identity Platform) For Firebase Authentication to work with App Check, you must first upgrade to [Firebase Authentication with Identity Platform](https://firebase.google.com/docs/auth#identity-platform)." +"description": "Required. The Service to update. The Service's `name` field is used to identify the Service to be updated, in the format: ``` projects/{project_number}/services/{service_id} ``` Note that the `service_id` element must be a supported service ID. Currently, the following service IDs are supported: * `firebasestorage.googleapis.com` (Cloud Storage for Firebase) * `firebasedatabase.googleapis.com` (Firebase Realtime Database) * `firestore.googleapis.com` (Cloud Firestore) * `identitytoolkit.googleapis.com` (Firebase Authentication with Identity Platform) * `oauth2.googleapis.com` (Google Identity for iOS) For Firebase Authentication to work with App Check, you must first upgrade to [Firebase Authentication with Identity Platform](https://firebase.google.com/docs/auth#identity-platform)." }, "updateMask": { "description": "Required. A comma-separated list of names of fields in the Service to update. 
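The reworded `updateMask` description spells out the batch semantics: a mask set on both the batch request and any nested `UpdateServiceRequest` must match, or the whole batch fails and nothing is committed. A sketch with placeholder project and service names, and the method shape assumed from the v1beta surface:

```python
from googleapiclient.discovery import build

appcheck = build("firebaseappcheck", "v1beta")

body = {
    "updateMask": "enforcement_mode",  # batch-level mask
    "requests": [
        {
            "service": {
                "name": "projects/1234567890/services/firestore.googleapis.com",
                "enforcementMode": "ENFORCED",
            },
            # Must match the batch-level mask above, or no updates are committed.
            "updateMask": "enforcement_mode",
        }
    ],
}
resp = (
    appcheck.projects()
    .services()
    .batchUpdate(parent="projects/1234567890", body=body)
    .execute()
)
```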
Example: `enforcement_mode`.", diff --git a/googleapiclient/discovery_cache/documents/firebaseappdistribution.v1.json b/googleapiclient/discovery_cache/documents/firebaseappdistribution.v1.json index d743833166d..be4b8396d7e 100644 --- a/googleapiclient/discovery_cache/documents/firebaseappdistribution.v1.json +++ b/googleapiclient/discovery_cache/documents/firebaseappdistribution.v1.json @@ -941,7 +941,7 @@ } } }, -"revision": "20240524", +"revision": "20240603", "rootUrl": "https://firebaseappdistribution.googleapis.com/", "schemas": { "GdataBlobstore2Info": { diff --git a/googleapiclient/discovery_cache/documents/firebaseappdistribution.v1alpha.json b/googleapiclient/discovery_cache/documents/firebaseappdistribution.v1alpha.json index b652b29fc62..9e9320b6aa5 100644 --- a/googleapiclient/discovery_cache/documents/firebaseappdistribution.v1alpha.json +++ b/googleapiclient/discovery_cache/documents/firebaseappdistribution.v1alpha.json @@ -585,7 +585,7 @@ } } }, -"revision": "20240524", +"revision": "20240603", "rootUrl": "https://firebaseappdistribution.googleapis.com/", "schemas": { "GoogleFirebaseAppdistroV1Release": { diff --git a/googleapiclient/discovery_cache/documents/firebasedatabase.v1beta.json b/googleapiclient/discovery_cache/documents/firebasedatabase.v1beta.json index ed6eca015f5..ba5c0ebd2cb 100644 --- a/googleapiclient/discovery_cache/documents/firebasedatabase.v1beta.json +++ b/googleapiclient/discovery_cache/documents/firebasedatabase.v1beta.json @@ -351,7 +351,7 @@ } } }, -"revision": "20240526", +"revision": "20240602", "rootUrl": "https://firebasedatabase.googleapis.com/", "schemas": { "DatabaseInstance": { diff --git a/googleapiclient/discovery_cache/documents/firebasedynamiclinks.v1.json b/googleapiclient/discovery_cache/documents/firebasedynamiclinks.v1.json index c1cf0dd0556..571ecaba0ae 100644 --- a/googleapiclient/discovery_cache/documents/firebasedynamiclinks.v1.json +++ b/googleapiclient/discovery_cache/documents/firebasedynamiclinks.v1.json @@ -224,7 +224,7 @@ } } }, -"revision": "20240520", +"revision": "20240527", "rootUrl": "https://firebasedynamiclinks.googleapis.com/", "schemas": { "AnalyticsInfo": { diff --git a/googleapiclient/discovery_cache/documents/firebasehosting.v1.json b/googleapiclient/discovery_cache/documents/firebasehosting.v1.json index 5abeb985878..2198f78f690 100644 --- a/googleapiclient/discovery_cache/documents/firebasehosting.v1.json +++ b/googleapiclient/discovery_cache/documents/firebasehosting.v1.json @@ -269,7 +269,7 @@ } } }, -"revision": "20240522", +"revision": "20240602", "rootUrl": "https://firebasehosting.googleapis.com/", "schemas": { "CancelOperationRequest": { diff --git a/googleapiclient/discovery_cache/documents/firebasehosting.v1beta1.json b/googleapiclient/discovery_cache/documents/firebasehosting.v1beta1.json index d22972040ce..e2ae2b3b605 100644 --- a/googleapiclient/discovery_cache/documents/firebasehosting.v1beta1.json +++ b/googleapiclient/discovery_cache/documents/firebasehosting.v1beta1.json @@ -2422,7 +2422,7 @@ } } }, -"revision": "20240522", +"revision": "20240602", "rootUrl": "https://firebasehosting.googleapis.com/", "schemas": { "ActingUser": { diff --git a/googleapiclient/discovery_cache/documents/firebaseml.v1.json b/googleapiclient/discovery_cache/documents/firebaseml.v1.json index 0570530d705..384397d44c5 100644 --- a/googleapiclient/discovery_cache/documents/firebaseml.v1.json +++ b/googleapiclient/discovery_cache/documents/firebaseml.v1.json @@ -204,7 +204,7 @@ } } }, -"revision": "20240524", 
+"revision": "20240531", "rootUrl": "https://firebaseml.googleapis.com/", "schemas": { "CancelOperationRequest": { diff --git a/googleapiclient/discovery_cache/documents/firebaseml.v1beta2.json b/googleapiclient/discovery_cache/documents/firebaseml.v1beta2.json index c955a85eb7e..f370802ac7d 100644 --- a/googleapiclient/discovery_cache/documents/firebaseml.v1beta2.json +++ b/googleapiclient/discovery_cache/documents/firebaseml.v1beta2.json @@ -318,7 +318,7 @@ } } }, -"revision": "20240524", +"revision": "20240531", "rootUrl": "https://firebaseml.googleapis.com/", "schemas": { "DownloadModelResponse": { diff --git a/googleapiclient/discovery_cache/documents/firebaseml.v2beta.json b/googleapiclient/discovery_cache/documents/firebaseml.v2beta.json index a033f08fabe..e10ca4a7cf0 100644 --- a/googleapiclient/discovery_cache/documents/firebaseml.v2beta.json +++ b/googleapiclient/discovery_cache/documents/firebaseml.v2beta.json @@ -206,7 +206,7 @@ } } }, -"revision": "20240524", +"revision": "20240531", "rootUrl": "https://firebaseml.googleapis.com/", "schemas": { "Blob": { diff --git a/googleapiclient/discovery_cache/documents/firebaserules.v1.json b/googleapiclient/discovery_cache/documents/firebaserules.v1.json index 1d41d0624ac..a4ce00f0b55 100644 --- a/googleapiclient/discovery_cache/documents/firebaserules.v1.json +++ b/googleapiclient/discovery_cache/documents/firebaserules.v1.json @@ -477,7 +477,7 @@ } } }, -"revision": "20240513", +"revision": "20240528", "rootUrl": "https://firebaserules.googleapis.com/", "schemas": { "Arg": { diff --git a/googleapiclient/discovery_cache/documents/firebasestorage.v1beta.json b/googleapiclient/discovery_cache/documents/firebasestorage.v1beta.json index 661f2c55352..3dffbe175ca 100644 --- a/googleapiclient/discovery_cache/documents/firebasestorage.v1beta.json +++ b/googleapiclient/discovery_cache/documents/firebasestorage.v1beta.json @@ -238,7 +238,7 @@ } } }, -"revision": "20240517", +"revision": "20240524", "rootUrl": "https://firebasestorage.googleapis.com/", "schemas": { "AddFirebaseRequest": { diff --git a/googleapiclient/discovery_cache/documents/fitness.v1.json b/googleapiclient/discovery_cache/documents/fitness.v1.json index 38834596c28..6b2be0abd12 100644 --- a/googleapiclient/discovery_cache/documents/fitness.v1.json +++ b/googleapiclient/discovery_cache/documents/fitness.v1.json @@ -832,7 +832,7 @@ } } }, -"revision": "20240523", +"revision": "20240602", "rootUrl": "https://fitness.googleapis.com/", "schemas": { "AggregateBucket": { diff --git a/googleapiclient/discovery_cache/documents/forms.v1.json b/googleapiclient/discovery_cache/documents/forms.v1.json index 0148b536311..0409e3a9c3a 100644 --- a/googleapiclient/discovery_cache/documents/forms.v1.json +++ b/googleapiclient/discovery_cache/documents/forms.v1.json @@ -423,7 +423,7 @@ } } }, -"revision": "20240507", +"revision": "20240521", "rootUrl": "https://forms.googleapis.com/", "schemas": { "Answer": { diff --git a/googleapiclient/discovery_cache/documents/gmail.v1.json b/googleapiclient/discovery_cache/documents/gmail.v1.json index 49dc1414e3f..42714ecd65f 100644 --- a/googleapiclient/discovery_cache/documents/gmail.v1.json +++ b/googleapiclient/discovery_cache/documents/gmail.v1.json @@ -3077,7 +3077,7 @@ } } }, -"revision": "20240520", +"revision": "20240527", "rootUrl": "https://gmail.googleapis.com/", "schemas": { "AutoForwarding": { diff --git a/googleapiclient/discovery_cache/documents/gmailpostmastertools.v1.json 
b/googleapiclient/discovery_cache/documents/gmailpostmastertools.v1.json index 8fe7f88112e..e9993430b2d 100644 --- a/googleapiclient/discovery_cache/documents/gmailpostmastertools.v1.json +++ b/googleapiclient/discovery_cache/documents/gmailpostmastertools.v1.json @@ -265,7 +265,7 @@ } } }, -"revision": "20240526", +"revision": "20240602", "rootUrl": "https://gmailpostmastertools.googleapis.com/", "schemas": { "DeliveryError": { diff --git a/googleapiclient/discovery_cache/documents/gmailpostmastertools.v1beta1.json b/googleapiclient/discovery_cache/documents/gmailpostmastertools.v1beta1.json index 7969c28f88c..1d369114bb8 100644 --- a/googleapiclient/discovery_cache/documents/gmailpostmastertools.v1beta1.json +++ b/googleapiclient/discovery_cache/documents/gmailpostmastertools.v1beta1.json @@ -265,7 +265,7 @@ } } }, -"revision": "20240526", +"revision": "20240602", "rootUrl": "https://gmailpostmastertools.googleapis.com/", "schemas": { "DeliveryError": { diff --git a/googleapiclient/discovery_cache/documents/groupsmigration.v1.json b/googleapiclient/discovery_cache/documents/groupsmigration.v1.json index a92df1cefeb..18c3047964e 100644 --- a/googleapiclient/discovery_cache/documents/groupsmigration.v1.json +++ b/googleapiclient/discovery_cache/documents/groupsmigration.v1.json @@ -146,7 +146,7 @@ } } }, -"revision": "20240520", +"revision": "20240527", "rootUrl": "https://groupsmigration.googleapis.com/", "schemas": { "Groups": { diff --git a/googleapiclient/discovery_cache/documents/healthcare.v1.json b/googleapiclient/discovery_cache/documents/healthcare.v1.json index 930ef57fb9f..6e2f91cd77a 100644 --- a/googleapiclient/discovery_cache/documents/healthcare.v1.json +++ b/googleapiclient/discovery_cache/documents/healthcare.v1.json @@ -4554,7 +4554,7 @@ } } }, -"revision": "20240513", +"revision": "20240521", "rootUrl": "https://healthcare.googleapis.com/", "schemas": { "ActivateConsentRequest": { diff --git a/googleapiclient/discovery_cache/documents/healthcare.v1beta1.json b/googleapiclient/discovery_cache/documents/healthcare.v1beta1.json index 110ea9ba981..26308768f43 100644 --- a/googleapiclient/discovery_cache/documents/healthcare.v1beta1.json +++ b/googleapiclient/discovery_cache/documents/healthcare.v1beta1.json @@ -5672,7 +5672,7 @@ } } }, -"revision": "20240513", +"revision": "20240521", "rootUrl": "https://healthcare.googleapis.com/", "schemas": { "AccessDeterminationLogConfig": { @@ -7449,10 +7449,6 @@ "description": "String of comma-delimited FHIR resource types. If provided, only resources of the specified resource type(s) are exported.", "type": "string" }, -"bigqueryDestination": { -"$ref": "GoogleCloudHealthcareV1beta1FhirBigQueryDestination", -"description": "The BigQuery output destination. The Cloud Healthcare Service Agent requires two IAM roles on the BigQuery location: `roles/bigquery.dataEditor` and `roles/bigquery.jobUser`. The output is one BigQuery table per resource type. Unlike when setting `BigQueryDestination` for `StreamConfig`, `ExportResources` does not create BigQuery views." -}, "gcsDestination": { "$ref": "GoogleCloudHealthcareV1beta1FhirGcsDestination", "description": "The Cloud Storage output destination. The Healthcare Service Agent account requires the `roles/storage.objectAdmin` role on the Cloud Storage location. The exported outputs are organized by FHIR resource types. The server creates one or more objects per resource type depending on the volume of the resources exported. 
When there is only one object per resource type, the object name is in the form of `{operation_id})_history_{resource_type}`. When there are multiple objects for a given resource type, the object names are in the form of `{operation_id}_history_{resource_type}-{index}-of-{total}`. Each object contains newline delimited JSON, and each line is a FHIR history bundle containing the history for a single resource." diff --git a/googleapiclient/discovery_cache/documents/homegraph.v1.json b/googleapiclient/discovery_cache/documents/homegraph.v1.json index 1b6883e747c..ddc9110e4e4 100644 --- a/googleapiclient/discovery_cache/documents/homegraph.v1.json +++ b/googleapiclient/discovery_cache/documents/homegraph.v1.json @@ -216,7 +216,7 @@ } } }, -"revision": "20240523", +"revision": "20240529", "rootUrl": "https://homegraph.googleapis.com/", "schemas": { "AgentDeviceId": { diff --git a/googleapiclient/discovery_cache/documents/iam.v1.json b/googleapiclient/discovery_cache/documents/iam.v1.json index 37dfd4ca381..e512c2aa3c5 100644 --- a/googleapiclient/discovery_cache/documents/iam.v1.json +++ b/googleapiclient/discovery_cache/documents/iam.v1.json @@ -3216,7 +3216,7 @@ } } }, -"revision": "20240521", +"revision": "20240530", "rootUrl": "https://iam.googleapis.com/", "schemas": { "AccessRestrictions": { @@ -4053,7 +4053,7 @@ "type": "object" }, "OauthClient": { -"description": "Represents an OauthClient. Used to access Google Cloud resources on behave of a user by using OAuth 2.0 Protocol to obtain an access token from Google Cloud Platform.", +"description": "Represents an OauthClient. Used to access Google Cloud resources on behalf of a Workforce Identity Federation user by using OAuth 2.0 Protocol to obtain an access token from Google Cloud.", "id": "OauthClient", "properties": { "allowedGrantTypes": { @@ -4081,7 +4081,7 @@ "type": "array" }, "allowedScopes": { -"description": "Required. The list of scopes that the OauthClient is allowed to request during OAuth flows. The following scopes are supported: * `https://www.googleapis.com/auth/cloud-platform`: See, edit, configure, and delete your Google Cloud data and see the email address for your Google Account. * `openid`: Associate you with your personal info on Google Cloud. * `email`: See your Google Cloud Account email address.", +"description": "Required. The list of scopes that the OauthClient is allowed to request during OAuth flows. The following scopes are supported: * `https://www.googleapis.com/auth/cloud-platform`: See, edit, configure, and delete your Google Cloud data and see the email address for your Google Account.", "items": { "type": "string" }, diff --git a/googleapiclient/discovery_cache/documents/iam.v2.json b/googleapiclient/discovery_cache/documents/iam.v2.json index bfa54168846..7c5d9775ed2 100644 --- a/googleapiclient/discovery_cache/documents/iam.v2.json +++ b/googleapiclient/discovery_cache/documents/iam.v2.json @@ -293,7 +293,7 @@ } } }, -"revision": "20240521", +"revision": "20240530", "rootUrl": "https://iam.googleapis.com/", "schemas": { "CloudControl2SharedOperationsReconciliationOperationMetadata": { @@ -611,6 +611,182 @@ false }, "type": "object" }, +"GoogleIamV3OperationMetadata": { +"description": "Represents the metadata of the long-running operation.", +"id": "GoogleIamV3OperationMetadata", +"properties": { +"apiVersion": { +"description": "Output only. API version used to start the operation.", +"readOnly": true, +"type": "string" +}, +"createTime": { +"description": "Output only. 
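The healthcare.v1beta1 description above pins down the history-export layout now that `bigqueryDestination` is removed: Cloud Storage objects of newline-delimited JSON, one FHIR history bundle per line. A minimal sketch of consuming one such object after download; the filename follows the multi-object pattern quoted above, and the path is a placeholder:

```python
import json

# {operation_id}_history_{resource_type}-{index}-of-{total}, downloaded locally
with open("12345_history_Patient-0-of-2") as f:
    for line in f:
        bundle = json.loads(line)  # one FHIR history bundle per resource
        print(bundle.get("resourceType"), len(bundle.get("entry", [])))
```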
The time the operation was created.", +"format": "google-datetime", +"readOnly": true, +"type": "string" +}, +"endTime": { +"description": "Output only. The time the operation finished running.", +"format": "google-datetime", +"readOnly": true, +"type": "string" +}, +"requestedCancellation": { +"description": "Output only. Identifies whether the user has requested cancellation of the operation. Operations that have successfully been cancelled have Operation.error value with a google.rpc.Status.code of 1, corresponding to `Code.CANCELLED`.", +"readOnly": true, +"type": "boolean" +}, +"statusMessage": { +"description": "Output only. Human-readable status of the operation, if any.", +"readOnly": true, +"type": "string" +}, +"target": { +"description": "Output only. Server-defined resource path for the target of the", +"readOnly": true, +"type": "string" +}, +"verb": { +"description": "Output only. Name of the verb executed by the operation.", +"readOnly": true, +"type": "string" +} +}, +"type": "object" +}, +"GoogleIamV3alphaOperationMetadata": { +"description": "Represents the metadata of the long-running operation.", +"id": "GoogleIamV3alphaOperationMetadata", +"properties": { +"apiVersion": { +"description": "Output only. API version used to start the operation.", +"readOnly": true, +"type": "string" +}, +"createTime": { +"description": "Output only. The time the operation was created.", +"format": "google-datetime", +"readOnly": true, +"type": "string" +}, +"endTime": { +"description": "Output only. The time the operation finished running.", +"format": "google-datetime", +"readOnly": true, +"type": "string" +}, +"requestedCancellation": { +"description": "Output only. Identifies whether the user has requested cancellation of the operation. Operations that have successfully been cancelled have Operation.error value with a google.rpc.Status.code of 1, corresponding to `Code.CANCELLED`.", +"readOnly": true, +"type": "boolean" +}, +"statusMessage": { +"description": "Output only. Human-readable status of the operation, if any.", +"readOnly": true, +"type": "string" +}, +"target": { +"description": "Output only. Server-defined resource path for the target of the", +"readOnly": true, +"type": "string" +}, +"verb": { +"description": "Output only. Name of the verb executed by the operation.", +"readOnly": true, +"type": "string" +} +}, +"type": "object" +}, +"GoogleIamV3betaOperationMetadata": { +"description": "Represents the metadata of the long-running operation.", +"id": "GoogleIamV3betaOperationMetadata", +"properties": { +"apiVersion": { +"description": "Output only. API version used to start the operation.", +"readOnly": true, +"type": "string" +}, +"createTime": { +"description": "Output only. The time the operation was created.", +"format": "google-datetime", +"readOnly": true, +"type": "string" +}, +"endTime": { +"description": "Output only. The time the operation finished running.", +"format": "google-datetime", +"readOnly": true, +"type": "string" +}, +"requestedCancellation": { +"description": "Output only. Identifies whether the user has requested cancellation of the operation. Operations that have successfully been cancelled have Operation.error value with a google.rpc.Status.code of 1, corresponding to `Code.CANCELLED`.", +"readOnly": true, +"type": "boolean" +}, +"statusMessage": { +"description": "Output only. Human-readable status of the operation, if any.", +"readOnly": true, +"type": "string" +}, +"target": { +"description": "Output only. 
Server-defined resource path for the target of the", +"readOnly": true, +"type": "string" +}, +"verb": { +"description": "Output only. Name of the verb executed by the operation.", +"readOnly": true, +"type": "string" +} +}, +"type": "object" +}, +"GoogleIamV3mainOperationMetadata": { +"description": "Represents the metadata of the long-running operation.", +"id": "GoogleIamV3mainOperationMetadata", +"properties": { +"apiVersion": { +"description": "Output only. API version used to start the operation.", +"readOnly": true, +"type": "string" +}, +"createTime": { +"description": "Output only. The time the operation was created.", +"format": "google-datetime", +"readOnly": true, +"type": "string" +}, +"endTime": { +"description": "Output only. The time the operation finished running.", +"format": "google-datetime", +"readOnly": true, +"type": "string" +}, +"requestedCancellation": { +"description": "Output only. Identifies whether the user has requested cancellation of the operation. Operations that have successfully been cancelled have Operation.error value with a google.rpc.Status.code of 1, corresponding to `Code.CANCELLED`.", +"readOnly": true, +"type": "boolean" +}, +"statusMessage": { +"description": "Output only. Human-readable status of the operation, if any.", +"readOnly": true, +"type": "string" +}, +"target": { +"description": "Output only. Server-defined resource path for the target of the", +"readOnly": true, +"type": "string" +}, +"verb": { +"description": "Output only. Name of the verb executed by the operation.", +"readOnly": true, +"type": "string" +} +}, +"type": "object" +}, "GoogleLongrunningOperation": { "description": "This resource represents a long-running operation that is the result of a network API call.", "id": "GoogleLongrunningOperation", diff --git a/googleapiclient/discovery_cache/documents/iam.v2beta.json b/googleapiclient/discovery_cache/documents/iam.v2beta.json index 33218ef0276..969e20facff 100644 --- a/googleapiclient/discovery_cache/documents/iam.v2beta.json +++ b/googleapiclient/discovery_cache/documents/iam.v2beta.json @@ -293,7 +293,7 @@ } } }, -"revision": "20240521", +"revision": "20240530", "rootUrl": "https://iam.googleapis.com/", "schemas": { "CloudControl2SharedOperationsReconciliationOperationMetadata": { @@ -611,6 +611,182 @@ false }, "type": "object" }, +"GoogleIamV3OperationMetadata": { +"description": "Represents the metadata of the long-running operation.", +"id": "GoogleIamV3OperationMetadata", +"properties": { +"apiVersion": { +"description": "Output only. API version used to start the operation.", +"readOnly": true, +"type": "string" +}, +"createTime": { +"description": "Output only. The time the operation was created.", +"format": "google-datetime", +"readOnly": true, +"type": "string" +}, +"endTime": { +"description": "Output only. The time the operation finished running.", +"format": "google-datetime", +"readOnly": true, +"type": "string" +}, +"requestedCancellation": { +"description": "Output only. Identifies whether the user has requested cancellation of the operation. Operations that have successfully been cancelled have Operation.error value with a google.rpc.Status.code of 1, corresponding to `Code.CANCELLED`.", +"readOnly": true, +"type": "boolean" +}, +"statusMessage": { +"description": "Output only. Human-readable status of the operation, if any.", +"readOnly": true, +"type": "string" +}, +"target": { +"description": "Output only. 
Server-defined resource path for the target of the", +"readOnly": true, +"type": "string" +}, +"verb": { +"description": "Output only. Name of the verb executed by the operation.", +"readOnly": true, +"type": "string" +} +}, +"type": "object" +}, +"GoogleIamV3alphaOperationMetadata": { +"description": "Represents the metadata of the long-running operation.", +"id": "GoogleIamV3alphaOperationMetadata", +"properties": { +"apiVersion": { +"description": "Output only. API version used to start the operation.", +"readOnly": true, +"type": "string" +}, +"createTime": { +"description": "Output only. The time the operation was created.", +"format": "google-datetime", +"readOnly": true, +"type": "string" +}, +"endTime": { +"description": "Output only. The time the operation finished running.", +"format": "google-datetime", +"readOnly": true, +"type": "string" +}, +"requestedCancellation": { +"description": "Output only. Identifies whether the user has requested cancellation of the operation. Operations that have successfully been cancelled have Operation.error value with a google.rpc.Status.code of 1, corresponding to `Code.CANCELLED`.", +"readOnly": true, +"type": "boolean" +}, +"statusMessage": { +"description": "Output only. Human-readable status of the operation, if any.", +"readOnly": true, +"type": "string" +}, +"target": { +"description": "Output only. Server-defined resource path for the target of the", +"readOnly": true, +"type": "string" +}, +"verb": { +"description": "Output only. Name of the verb executed by the operation.", +"readOnly": true, +"type": "string" +} +}, +"type": "object" +}, +"GoogleIamV3betaOperationMetadata": { +"description": "Represents the metadata of the long-running operation.", +"id": "GoogleIamV3betaOperationMetadata", +"properties": { +"apiVersion": { +"description": "Output only. API version used to start the operation.", +"readOnly": true, +"type": "string" +}, +"createTime": { +"description": "Output only. The time the operation was created.", +"format": "google-datetime", +"readOnly": true, +"type": "string" +}, +"endTime": { +"description": "Output only. The time the operation finished running.", +"format": "google-datetime", +"readOnly": true, +"type": "string" +}, +"requestedCancellation": { +"description": "Output only. Identifies whether the user has requested cancellation of the operation. Operations that have successfully been cancelled have Operation.error value with a google.rpc.Status.code of 1, corresponding to `Code.CANCELLED`.", +"readOnly": true, +"type": "boolean" +}, +"statusMessage": { +"description": "Output only. Human-readable status of the operation, if any.", +"readOnly": true, +"type": "string" +}, +"target": { +"description": "Output only. Server-defined resource path for the target of the", +"readOnly": true, +"type": "string" +}, +"verb": { +"description": "Output only. Name of the verb executed by the operation.", +"readOnly": true, +"type": "string" +} +}, +"type": "object" +}, +"GoogleIamV3mainOperationMetadata": { +"description": "Represents the metadata of the long-running operation.", +"id": "GoogleIamV3mainOperationMetadata", +"properties": { +"apiVersion": { +"description": "Output only. API version used to start the operation.", +"readOnly": true, +"type": "string" +}, +"createTime": { +"description": "Output only. The time the operation was created.", +"format": "google-datetime", +"readOnly": true, +"type": "string" +}, +"endTime": { +"description": "Output only. 
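The four `GoogleIamV3*OperationMetadata` schemas added to iam.v2 and iam.v2beta are field-for-field identical, so a single helper can summarize any of them. A sketch over a hand-built Operation dict; the sample values are invented:

```python
def summarize_operation(op):
    """Flatten the common GoogleIamV3*OperationMetadata fields for logging."""
    meta = op.get("metadata", {})
    return {
        "verb": meta.get("verb"),
        "target": meta.get("target"),
        "created": meta.get("createTime"),
        "finished": meta.get("endTime"),
        "cancel_requested": meta.get("requestedCancellation", False),
        "done": op.get("done", False),
    }

op = {
    "name": "projects/-/locations/global/operations/op-123",  # invented
    "metadata": {
        "@type": "type.googleapis.com/google.iam.v3.OperationMetadata",
        "verb": "create",
        "target": "projects/-/locations/global/oauthClients/abc",
        "createTime": "2024-06-01T00:00:00Z",
        "requestedCancellation": False,
    },
    "done": False,
}
print(summarize_operation(op))
```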
The time the operation finished running.", +"format": "google-datetime", +"readOnly": true, +"type": "string" +}, +"requestedCancellation": { +"description": "Output only. Identifies whether the user has requested cancellation of the operation. Operations that have successfully been cancelled have Operation.error value with a google.rpc.Status.code of 1, corresponding to `Code.CANCELLED`.", +"readOnly": true, +"type": "boolean" +}, +"statusMessage": { +"description": "Output only. Human-readable status of the operation, if any.", +"readOnly": true, +"type": "string" +}, +"target": { +"description": "Output only. Server-defined resource path for the target of the", +"readOnly": true, +"type": "string" +}, +"verb": { +"description": "Output only. Name of the verb executed by the operation.", +"readOnly": true, +"type": "string" +} +}, +"type": "object" +}, "GoogleLongrunningOperation": { "description": "This resource represents a long-running operation that is the result of a network API call.", "id": "GoogleLongrunningOperation", diff --git a/googleapiclient/discovery_cache/documents/iamcredentials.v1.json b/googleapiclient/discovery_cache/documents/iamcredentials.v1.json index c71b5f43d15..2e39940312c 100644 --- a/googleapiclient/discovery_cache/documents/iamcredentials.v1.json +++ b/googleapiclient/discovery_cache/documents/iamcredentials.v1.json @@ -226,7 +226,7 @@ } } }, -"revision": "20240515", +"revision": "20240521", "rootUrl": "https://iamcredentials.googleapis.com/", "schemas": { "GenerateAccessTokenRequest": { diff --git a/googleapiclient/discovery_cache/documents/iap.v1.json b/googleapiclient/discovery_cache/documents/iap.v1.json index 7562d102b07..42bb3ea20a4 100644 --- a/googleapiclient/discovery_cache/documents/iap.v1.json +++ b/googleapiclient/discovery_cache/documents/iap.v1.json @@ -682,7 +682,7 @@ } } }, -"revision": "20240520", +"revision": "20240527", "rootUrl": "https://iap.googleapis.com/", "schemas": { "AccessDeniedPageSettings": { @@ -1156,7 +1156,7 @@ "type": "string" }, "type": { -"description": "Resource type. Types are defined in IAM's .service files. Valid values for type might be 'gce', 'gcs', 'project', 'account' etc.", +"description": "Resource type. Types are defined in IAM's .service files. 
Valid values for type might be 'storage_buckets', 'compute_instances', 'resourcemanager_customers', 'billing_accounts', etc.", "type": "string" } }, diff --git a/googleapiclient/discovery_cache/documents/iap.v1beta1.json b/googleapiclient/discovery_cache/documents/iap.v1beta1.json index 4bdf0d62ee8..68fba604abb 100644 --- a/googleapiclient/discovery_cache/documents/iap.v1beta1.json +++ b/googleapiclient/discovery_cache/documents/iap.v1beta1.json @@ -194,7 +194,7 @@ } } }, -"revision": "20240520", +"revision": "20240527", "rootUrl": "https://iap.googleapis.com/", "schemas": { "Binding": { diff --git a/googleapiclient/discovery_cache/documents/identitytoolkit.v1.json b/googleapiclient/discovery_cache/documents/identitytoolkit.v1.json index 4ae1e3e09c5..33ecd6795f1 100644 --- a/googleapiclient/discovery_cache/documents/identitytoolkit.v1.json +++ b/googleapiclient/discovery_cache/documents/identitytoolkit.v1.json @@ -1239,7 +1239,7 @@ } } }, -"revision": "20240508", +"revision": "20240522", "rootUrl": "https://identitytoolkit.googleapis.com/", "schemas": { "GoogleCloudIdentitytoolkitV1Argon2Parameters": { diff --git a/googleapiclient/discovery_cache/documents/identitytoolkit.v2.json b/googleapiclient/discovery_cache/documents/identitytoolkit.v2.json index bc40d8da6c1..4d181dc1e28 100644 --- a/googleapiclient/discovery_cache/documents/identitytoolkit.v2.json +++ b/googleapiclient/discovery_cache/documents/identitytoolkit.v2.json @@ -1655,7 +1655,7 @@ } } }, -"revision": "20240508", +"revision": "20240522", "rootUrl": "https://identitytoolkit.googleapis.com/", "schemas": { "GoogleCloudIdentitytoolkitAdminV2AllowByDefault": { @@ -1709,7 +1709,8 @@ "type": "array" }, "codeFlowConfig": { -"$ref": "GoogleCloudIdentitytoolkitAdminV2CodeFlowConfig" +"$ref": "GoogleCloudIdentitytoolkitAdminV2CodeFlowConfig", +"description": "Additional config for Apple for code flow." } }, "type": "object" diff --git a/googleapiclient/discovery_cache/documents/indexing.v3.json b/googleapiclient/discovery_cache/documents/indexing.v3.json index 73e3bfe3ea6..e12edde16cc 100644 --- a/googleapiclient/discovery_cache/documents/indexing.v3.json +++ b/googleapiclient/discovery_cache/documents/indexing.v3.json @@ -149,7 +149,7 @@ } } }, -"revision": "20240521", +"revision": "20240528", "rootUrl": "https://indexing.googleapis.com/", "schemas": { "PublishUrlNotificationResponse": { diff --git a/googleapiclient/discovery_cache/documents/integrations.v1.json b/googleapiclient/discovery_cache/documents/integrations.v1.json index ecc3338a6d2..445b49fc10f 100644 --- a/googleapiclient/discovery_cache/documents/integrations.v1.json +++ b/googleapiclient/discovery_cache/documents/integrations.v1.json @@ -1380,6 +1380,34 @@ "scopes": [ "https://www.googleapis.com/auth/cloud-platform" ] +}, +"replay": { +"description": "Re-execute an existing execution, with same request parameters and execution strategy", +"flatPath": "v1/projects/{projectsId}/locations/{locationsId}/integrations/{integrationsId}/executions/{executionsId}:replay", +"httpMethod": "POST", +"id": "integrations.projects.locations.integrations.executions.replay", +"parameterOrder": [ +"name" +], +"parameters": { +"name": { +"description": "Required. The execution resource name. 
Format: projects/{gcp_project_id}/locations/{location}/integrations/{integration}/executions/{execution_id}", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/integrations/[^/]+/executions/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1/{+name}:replay", +"request": { +"$ref": "GoogleCloudIntegrationsV1alphaReplayExecutionRequest" +}, +"response": { +"$ref": "GoogleCloudIntegrationsV1alphaReplayExecutionResponse" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] } }, "resources": { @@ -3712,7 +3740,7 @@ } } }, -"revision": "20240519", +"revision": "20240526", "rootUrl": "https://integrations.googleapis.com/", "schemas": { "CrmlogErrorCode": { @@ -4418,6 +4446,23 @@ false }, "type": "object" }, +"EnterpriseCrmEventbusProtoConditionalFailurePolicies": { +"id": "EnterpriseCrmEventbusProtoConditionalFailurePolicies", +"properties": { +"defaultFailurePolicy": { +"$ref": "EnterpriseCrmEventbusProtoFailurePolicy", +"description": "The default failure policy to be applied if no conditional failure policy matches" +}, +"failurePolicies": { +"description": "The list of failure policies that will be applied to the task in order.", +"items": { +"$ref": "EnterpriseCrmEventbusProtoFailurePolicy" +}, +"type": "array" +} +}, +"type": "object" +}, "EnterpriseCrmEventbusProtoConnectorsConnection": { "id": "EnterpriseCrmEventbusProtoConnectorsConnection", "properties": { @@ -4756,13 +4801,17 @@ false "type": "object" }, "EnterpriseCrmEventbusProtoEventExecutionSnapshot": { -"description": "Contains the snapshot of the event execution for a given checkpoint. Next available id: 13", +"description": "Contains the snapshot of the event execution for a given checkpoint. Next available id: 15", "id": "EnterpriseCrmEventbusProtoEventExecutionSnapshot", "properties": { "checkpointTaskNumber": { "description": "Indicates \"right after which checkpoint task's execution\" this snapshot is taken.", "type": "string" }, +"clientId": { +"description": "Client that the execution snapshot is associated to.", +"type": "string" +}, "conditionResults": { "description": "All of the computed conditions that been calculated.", "items": { @@ -4809,6 +4858,10 @@ false "deprecated": true, "description": "The task name associated with this snapshot. Could be empty.", "type": "string" +}, +"workflowName": { +"description": "Name of the workflow this event execution snapshot belongs to.", +"type": "string" +} }, "type": "object" @@ -7800,6 +7853,10 @@ false }, "type": "array" }, +"conditionalFailurePolicies": { +"$ref": "EnterpriseCrmEventbusProtoConditionalFailurePolicies", +"description": "Optional. Determines the number of times the task will be retried on failure and with what retry strategy. This is applicable for synchronous calls to Eventbus alone (Post)." +}, "createTime": { "description": "Auto-generated.", "format": "google-datetime", @@ -9833,6 +9890,24 @@ false }, "type": "object" }, +"GoogleCloudIntegrationsV1alphaConditionalFailurePolicies": { +"description": "Conditional task failure retry strategies", +"id": "GoogleCloudIntegrationsV1alphaConditionalFailurePolicies", +"properties": { +"defaultFailurePolicy": { +"$ref": "GoogleCloudIntegrationsV1alphaFailurePolicy", +"description": "The default failure policy to be applied if no conditional failure policy matches."
+}, +"failurePolicies": { +"description": "The list of failure policies that will be applied to the task in order.", +"items": { +"$ref": "GoogleCloudIntegrationsV1alphaFailurePolicy" +}, +"type": "array" +} +}, +"type": "object" +}, "GoogleCloudIntegrationsV1alphaConnectionSchemaMetadata": { "description": "Metadata of runtime connection schema.", "id": "GoogleCloudIntegrationsV1alphaConnectionSchemaMetadata", @@ -11780,6 +11855,40 @@ false }, "type": "object" }, +"GoogleCloudIntegrationsV1alphaReplayExecutionRequest": { +"description": "Request for replaying an execution Next ID: 3", +"id": "GoogleCloudIntegrationsV1alphaReplayExecutionRequest", +"properties": { +"replayReason": { +"description": "Optional. The user provided reason for replaying the execution.", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudIntegrationsV1alphaReplayExecutionResponse": { +"description": "Response for replaying an execution Next ID: 4", +"id": "GoogleCloudIntegrationsV1alphaReplayExecutionResponse", +"properties": { +"executionId": { +"description": "The id of the execution corresponding to this run of integration.", +"type": "string" +}, +"outputParameters": { +"additionalProperties": { +"description": "Properties of the object.", +"type": "any" +}, +"description": "OUTPUT parameters in format of Map. Where Key is the name of the parameter. The parameters would only be present in case of synchrounous execution Note: Name of the system generated parameters are wrapped by backtick(`) to distinguish them from the user defined parameters.", +"type": "object" +}, +"replayedExecutionId": { +"description": "The execution id which is replayed", +"type": "string" +} +}, +"type": "object" +}, "GoogleCloudIntegrationsV1alphaResolveSuspensionRequest": { "description": "Request for [Suspensions.ResolveSuspensions].", "id": "GoogleCloudIntegrationsV1alphaResolveSuspensionRequest", @@ -12236,6 +12345,10 @@ false "description": "The task configuration details. This is not the implementation of Task. There might be multiple TaskConfigs for the same Task.", "id": "GoogleCloudIntegrationsV1alphaTaskConfig", "properties": { +"conditionalFailurePolicies": { +"$ref": "GoogleCloudIntegrationsV1alphaConditionalFailurePolicies", +"description": "Optional. The list of conditional failure policies that will be applied to the task in order." +}, "description": { "description": "Optional. 
User-provided description intended to give additional business context about the task.", "type": "string" diff --git a/googleapiclient/discovery_cache/documents/keep.v1.json b/googleapiclient/discovery_cache/documents/keep.v1.json index c4c47eddd24..8240cb894a7 100644 --- a/googleapiclient/discovery_cache/documents/keep.v1.json +++ b/googleapiclient/discovery_cache/documents/keep.v1.json @@ -314,7 +314,7 @@ } } }, -"revision": "20240521", +"revision": "20240528", "rootUrl": "https://keep.googleapis.com/", "schemas": { "Attachment": { diff --git a/googleapiclient/discovery_cache/documents/kgsearch.v1.json b/googleapiclient/discovery_cache/documents/kgsearch.v1.json index d3fb4735806..3223876512d 100644 --- a/googleapiclient/discovery_cache/documents/kgsearch.v1.json +++ b/googleapiclient/discovery_cache/documents/kgsearch.v1.json @@ -151,7 +151,7 @@ } } }, -"revision": "20240526", +"revision": "20240602", "rootUrl": "https://kgsearch.googleapis.com/", "schemas": { "SearchResponse": { diff --git a/googleapiclient/discovery_cache/documents/language.v1.json b/googleapiclient/discovery_cache/documents/language.v1.json index 93e186e4b83..a752c40457d 100644 --- a/googleapiclient/discovery_cache/documents/language.v1.json +++ b/googleapiclient/discovery_cache/documents/language.v1.json @@ -246,7 +246,7 @@ } } }, -"revision": "20240521", +"revision": "20240602", "rootUrl": "https://language.googleapis.com/", "schemas": { "AnalyzeEntitiesRequest": { diff --git a/googleapiclient/discovery_cache/documents/language.v1beta2.json b/googleapiclient/discovery_cache/documents/language.v1beta2.json index dff1a510ab2..2fc9c4ea889 100644 --- a/googleapiclient/discovery_cache/documents/language.v1beta2.json +++ b/googleapiclient/discovery_cache/documents/language.v1beta2.json @@ -246,7 +246,7 @@ } } }, -"revision": "20240521", +"revision": "20240602", "rootUrl": "https://language.googleapis.com/", "schemas": { "AnalyzeEntitiesRequest": { diff --git a/googleapiclient/discovery_cache/documents/language.v2.json b/googleapiclient/discovery_cache/documents/language.v2.json index b1f1483c547..946ab16a0f9 100644 --- a/googleapiclient/discovery_cache/documents/language.v2.json +++ b/googleapiclient/discovery_cache/documents/language.v2.json @@ -208,7 +208,7 @@ } } }, -"revision": "20240521", +"revision": "20240602", "rootUrl": "https://language.googleapis.com/", "schemas": { "AnalyzeEntitiesRequest": { diff --git a/googleapiclient/discovery_cache/documents/libraryagent.v1.json b/googleapiclient/discovery_cache/documents/libraryagent.v1.json index 027889744c8..3350b41ec5f 100644 --- a/googleapiclient/discovery_cache/documents/libraryagent.v1.json +++ b/googleapiclient/discovery_cache/documents/libraryagent.v1.json @@ -279,7 +279,7 @@ } } }, -"revision": "20240526", +"revision": "20240602", "rootUrl": "https://libraryagent.googleapis.com/", "schemas": { "GoogleExampleLibraryagentV1Book": { diff --git a/googleapiclient/discovery_cache/documents/licensing.v1.json b/googleapiclient/discovery_cache/documents/licensing.v1.json index 4b1a8b29f29..c31cf2664cf 100644 --- a/googleapiclient/discovery_cache/documents/licensing.v1.json +++ b/googleapiclient/discovery_cache/documents/licensing.v1.json @@ -400,7 +400,7 @@ } } }, -"revision": "20240524", +"revision": "20240601", "rootUrl": "https://licensing.googleapis.com/", "schemas": { "Empty": { diff --git a/googleapiclient/discovery_cache/documents/lifesciences.v2beta.json b/googleapiclient/discovery_cache/documents/lifesciences.v2beta.json index 4665082fdac..7fa6f8f8813 100644 
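Since request and response bodies in the generated client are plain JSON, the new `conditionalFailurePolicies` field on `GoogleCloudIntegrationsV1alphaTaskConfig` above is expressed as a nested dict. A rough sketch of the shape, following the two schemas introduced in this change; the contents of `GoogleCloudIntegrationsV1alphaFailurePolicy` are not part of this diff, so the `condition`, `retryStrategy`, and `maxRetries` fields below are assumptions for illustration only:

```python
# Hypothetical task configuration fragment. The keys conditionalFailurePolicies,
# failurePolicies, and defaultFailurePolicy come from the schemas in this change;
# the FailurePolicy contents are illustrative assumptions.
task_config = {
    "taskId": "1",  # assumed TaskConfig field, not shown in this diff
    "conditionalFailurePolicies": {
        # Policies are applied to the task in order, per the schema description.
        "failurePolicies": [
            {
                "condition": "$errorCode$ = 429",  # assumed field and syntax
                "retryStrategy": "EXPONENTIAL_BACKOFF",  # assumed enum value
                "maxRetries": 3,  # assumed field
            }
        ],
        # Used when no conditional failure policy matches.
        "defaultFailurePolicy": {"retryStrategy": "NONE"},  # assumed enum value
    },
}
```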
--- a/googleapiclient/discovery_cache/documents/lifesciences.v2beta.json +++ b/googleapiclient/discovery_cache/documents/lifesciences.v2beta.json @@ -312,7 +312,7 @@ } } }, -"revision": "20240426", +"revision": "20240524", "rootUrl": "https://lifesciences.googleapis.com/", "schemas": { "Accelerator": { diff --git a/googleapiclient/discovery_cache/documents/localservices.v1.json b/googleapiclient/discovery_cache/documents/localservices.v1.json index b771b116e3e..931c55ed905 100644 --- a/googleapiclient/discovery_cache/documents/localservices.v1.json +++ b/googleapiclient/discovery_cache/documents/localservices.v1.json @@ -250,7 +250,7 @@ } } }, -"revision": "20240526", +"revision": "20240602", "rootUrl": "https://localservices.googleapis.com/", "schemas": { "GoogleAdsHomeservicesLocalservicesV1AccountReport": { diff --git a/googleapiclient/discovery_cache/documents/logging.v2.json b/googleapiclient/discovery_cache/documents/logging.v2.json index 0795f4dc00c..a1475d149fe 100644 --- a/googleapiclient/discovery_cache/documents/logging.v2.json +++ b/googleapiclient/discovery_cache/documents/logging.v2.json @@ -8132,7 +8132,7 @@ } } }, -"revision": "20240503", +"revision": "20240523", "rootUrl": "https://logging.googleapis.com/", "schemas": { "AuditConfig": { diff --git a/googleapiclient/discovery_cache/documents/marketingplatformadmin.v1alpha.json b/googleapiclient/discovery_cache/documents/marketingplatformadmin.v1alpha.json index e7906d29ae9..6abc1f93f93 100644 --- a/googleapiclient/discovery_cache/documents/marketingplatformadmin.v1alpha.json +++ b/googleapiclient/discovery_cache/documents/marketingplatformadmin.v1alpha.json @@ -15,7 +15,7 @@ "baseUrl": "https://marketingplatformadmin.googleapis.com/", "batchPath": "batch", "canonicalName": "Google Marketing Platform Admin API", -"description": "The Google Marketing Platform Admin API allows for programmatic access to the Google Marketing Platform configuration data. You can use the Google Marketing Platform Admin API to manage links between your Google Marketing Platform organization and Google Analytics accounts, set the service level of your GA4 properties.", +"description": "The Google Marketing Platform Admin API allows for programmatic access to the Google Marketing Platform configuration data. 
You can use the Google Marketing Platform Admin API to manage links between your Google Marketing Platform organization and Google Analytics accounts, and to set the service level of your GA4 properties.", "discoveryVersion": "v1", "documentationLink": "https://developers.google.com/analytics/devguides/config/gmp/v1", "fullyEncodeReservedExpansion": true, @@ -263,7 +263,7 @@ } } }, -"revision": "20240522", +"revision": "20240603", "rootUrl": "https://marketingplatformadmin.googleapis.com/", "schemas": { "AnalyticsAccountLink": { diff --git a/googleapiclient/discovery_cache/documents/migrationcenter.v1.json b/googleapiclient/discovery_cache/documents/migrationcenter.v1.json index 560280f5f66..d0e89fb2bcc 100644 --- a/googleapiclient/discovery_cache/documents/migrationcenter.v1.json +++ b/googleapiclient/discovery_cache/documents/migrationcenter.v1.json @@ -2309,7 +2309,7 @@ } } }, -"revision": "20240516", +"revision": "20240523", "rootUrl": "https://migrationcenter.googleapis.com/", "schemas": { "AddAssetsToGroupRequest": { diff --git a/googleapiclient/discovery_cache/documents/migrationcenter.v1alpha1.json b/googleapiclient/discovery_cache/documents/migrationcenter.v1alpha1.json index f51db3c354d..32edbca96dc 100644 --- a/googleapiclient/discovery_cache/documents/migrationcenter.v1alpha1.json +++ b/googleapiclient/discovery_cache/documents/migrationcenter.v1alpha1.json @@ -541,6 +541,162 @@ } } }, +"assetsExportJobs": { +"methods": { +"create": { +"description": "Creates a new assets export job.", +"flatPath": "v1alpha1/projects/{projectsId}/locations/{locationsId}/assetsExportJobs", +"httpMethod": "POST", +"id": "migrationcenter.projects.locations.assetsExportJobs.create", +"parameterOrder": [ +"parent" +], +"parameters": { +"assetsExportJobId": { +"description": "Required. The ID to use for the assets export job.", +"location": "query", +"type": "string" +}, +"parent": { +"description": "Required. The parent resource where the assets export job will be created.", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+$", +"required": true, +"type": "string" +}, +"requestId": { +"description": "Optional. An optional request ID to identify requests. Specify a unique request ID so that if you must retry your request, the server will know to ignore the request if it has already been completed. The server will guarantee that for at least 60 minutes after the first request. For example, consider a situation where you make an initial request and the request times out. If you make the request again with the same request ID, the server can check if original operation with the same request ID was received, and if so, will ignore the second request. This prevents clients from accidentally creating duplicate commitments. The request ID must be a valid UUID with the exception that zero UUID is not supported (00000000-0000-0000-0000-000000000000).", +"location": "query", +"type": "string" +} +}, +"path": "v1alpha1/{+parent}/assetsExportJobs", +"request": { +"$ref": "AssetsExportJob" +}, +"response": { +"$ref": "Operation" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +}, +"delete": { +"description": "Deletes an assets export job.", +"flatPath": "v1alpha1/projects/{projectsId}/locations/{locationsId}/assetsExportJobs/{assetsExportJobsId}", +"httpMethod": "DELETE", +"id": "migrationcenter.projects.locations.assetsExportJobs.delete", +"parameterOrder": [ +"name" +], +"parameters": { +"name": { +"description": "Required.
The name of the assets export job to delete.", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/assetsExportJobs/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1alpha1/{+name}", +"response": { +"$ref": "Operation" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +}, +"get": { +"description": "Gets the details of an assets export job.", +"flatPath": "v1alpha1/projects/{projectsId}/locations/{locationsId}/assetsExportJobs/{assetsExportJobsId}", +"httpMethod": "GET", +"id": "migrationcenter.projects.locations.assetsExportJobs.get", +"parameterOrder": [ +"name" +], +"parameters": { +"name": { +"description": "Required. Name of the resource.", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/assetsExportJobs/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1alpha1/{+name}", +"response": { +"$ref": "AssetsExportJob" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +}, +"list": { +"description": "Lists all the assets export jobs in a given project and location.", +"flatPath": "v1alpha1/projects/{projectsId}/locations/{locationsId}/assetsExportJobs", +"httpMethod": "GET", +"id": "migrationcenter.projects.locations.assetsExportJobs.list", +"parameterOrder": [ +"parent" +], +"parameters": { +"pageSize": { +"description": "Optional. Requested page size. The server may return fewer items than requested. If unspecified, the server will pick an appropriate default value.", +"format": "int32", +"location": "query", +"type": "integer" +}, +"pageToken": { +"description": "Optional. A token identifying a page of results that the server should return.", +"location": "query", +"type": "string" +}, +"parent": { +"description": "Required. Parent resource.", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1alpha1/{+parent}/assetsExportJobs", +"response": { +"$ref": "ListAssetsExportJobsResponse" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +}, +"run": { +"description": "Runs an assets export job, returning an AssetsExportJobExecution.", +"flatPath": "v1alpha1/projects/{projectsId}/locations/{locationsId}/assetsExportJobs/{assetsExportJobsId}:run", +"httpMethod": "POST", +"id": "migrationcenter.projects.locations.assetsExportJobs.run", +"parameterOrder": [ +"name" +], +"parameters": { +"name": { +"description": "Required. Name of the resource.", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/assetsExportJobs/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1alpha1/{+name}:run", +"request": { +"$ref": "RunAssetsExportJobRequest" +}, +"response": { +"$ref": "Operation" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +} +} +}, "discoveryClients": { "methods": { "create": { @@ -2317,7 +2473,7 @@ } } }, -"revision": "20240516", +"revision": "20240523", "rootUrl": "https://migrationcenter.googleapis.com/", "schemas": { "AddAssetsToGroupRequest": { @@ -2724,6 +2880,132 @@ }, "type": "object" }, +"AssetsExportJob": { +"description": "Assets export job message.", +"id": "AssetsExportJob", +"properties": { +"condition": { +"$ref": "AssetsExportJobExportCondition", +"description": "Optional. Conditions for selecting assets to export." +}, +"createTime": { +"description": "Output only. 
Resource creation time.", +"format": "google-datetime", +"readOnly": true, +"type": "string" +}, +"labels": { +"additionalProperties": { +"type": "string" +}, +"description": "Optional. Labels as key value pairs. Labels must meet the following constraints: * Keys and values can contain only lowercase letters, numeric characters, underscores, and dashes. * All characters must use UTF-8 encoding, and international characters are allowed. * Keys must start with a lowercase letter or international character. * Each resource is limited to a maximum of 64 labels. Both keys and values are additionally constrained to be <= 128 bytes.", +"type": "object" +}, +"name": { +"description": "Output only. Identifier. Resource name.", +"readOnly": true, +"type": "string" +}, +"networkDependencies": { +"$ref": "AssetsExportJobNetworkDependencies", +"description": "Export data regarding asset network dependencies." +}, +"recentExecutions": { +"description": "Output only. Recent non-expired executions of the job.", +"items": { +"$ref": "AssetsExportJobExecution" +}, +"readOnly": true, +"type": "array" +}, +"signedUriDestination": { +"$ref": "SignedUriDestination", +"description": "Export to Cloud Storage files downloadable using signed URIs." +}, +"updateTime": { +"description": "Output only. Resource update time.", +"format": "google-datetime", +"readOnly": true, +"type": "string" +} +}, +"type": "object" +}, +"AssetsExportJobExecution": { +"description": "Execution status of assets export job.", +"id": "AssetsExportJobExecution", +"properties": { +"endTime": { +"description": "Output only. Completion time of the export.", +"format": "google-datetime", +"readOnly": true, +"type": "string" +}, +"executionId": { +"description": "Output only. Globally unique identifier of the execution.", +"readOnly": true, +"type": "string" +}, +"expireTime": { +"description": "Output only. Expiration time for the export and artifacts.", +"format": "google-datetime", +"readOnly": true, +"type": "string" +}, +"result": { +"$ref": "AssetsExportJobExecutionResult", +"description": "Output only. Result of the export execution.", +"readOnly": true +}, +"startTime": { +"description": "Output only. Execution timestamp.", +"format": "google-datetime", +"readOnly": true, +"type": "string" +} +}, +"type": "object" +}, +"AssetsExportJobExecutionResult": { +"description": "Contains the result of the assets export.", +"id": "AssetsExportJobExecutionResult", +"properties": { +"error": { +"$ref": "Status", +"description": "Output only. Error encountered during export.", +"readOnly": true +}, +"signedUris": { +"$ref": "SignedUris", +"description": "Output only. Signed URLs for downloading export artifacts.", +"readOnly": true +} +}, +"type": "object" +}, +"AssetsExportJobExportCondition": { +"description": "Conditions for selecting assets to export.", +"id": "AssetsExportJobExportCondition", +"properties": { +"filter": { +"description": "Optional. Assets filter, supports the same syntax as asset listing.", +"type": "string" +} +}, +"type": "object" +}, +"AssetsExportJobNetworkDependencies": { +"description": "Configuration for network dependencies exports.", +"id": "AssetsExportJobNetworkDependencies", +"properties": { +"maxDays": { +"description": "Optional. When this value is set to a positive integer, network connections data will be returned for the most recent days for which data is available.
When this value is unset (or set to zero), all available data is returned.", +"format": "int32", +"type": "integer" +} +}, +"type": "object" +}, "AwsEc2PlatformDetails": { "description": "AWS EC2 specific details.", "id": "AwsEc2PlatformDetails", @@ -3147,7 +3429,7 @@ "id": "ComputeEnginePreferences", "properties": { "licenseType": { -"description": "Overridden by os_pricing_preferences if specified. License type to consider when calculating costs for virtual machine insights and recommendations. If unspecified, costs are calculated based on the default licensing plan.", +"description": "If os_pricing_preferences is specified, it overrides this field. License type to consider when calculating costs for virtual machine insights and recommendations. If unspecified, costs are calculated based on the default licensing plan.", "enum": [ "LICENSE_TYPE_UNSPECIFIED", "LICENSE_TYPE_DEFAULT", @@ -5073,6 +5355,26 @@ false }, "type": "object" }, +"ListAssetsExportJobsResponse": { +"description": "Response message for listing assets export jobs.", +"id": "ListAssetsExportJobsResponse", +"properties": { +"assetsExportJobs": { +"description": "Output only. The list of assets export jobs.", +"items": { +"$ref": "AssetsExportJob" +}, +"readOnly": true, +"type": "array" +}, +"nextPageToken": { +"description": "Output only. A token identifying a page of results the server should return.", +"readOnly": true, +"type": "string" +} +}, +"type": "object" +}, "ListAssetsResponse": { "description": "Response message for listing assets.", "id": "ListAssetsResponse", @@ -6877,6 +7179,29 @@ false }, "type": "object" }, +"RunAssetsExportJobRequest": { +"description": "A request to run an assets export job.", +"id": "RunAssetsExportJobRequest", +"properties": { +"requestId": { +"description": "Optional. An optional request ID to identify requests. Specify a unique request ID so that if you must retry your request, the server will know to ignore the request if it has already been completed. The server will guarantee that for at least 60 minutes after the first request. For example, consider a situation where you make an initial request and the request times out. If you make the request again with the same request ID, the server can check if original operation with the same request ID was received, and if so, will ignore the second request. This prevents clients from accidentally creating duplicate commitments. The request ID must be a valid UUID with the exception that zero UUID is not supported (00000000-0000-0000-0000-000000000000).", +"type": "string" +} +}, +"type": "object" +}, +"RunAssetsExportJobResponse": { +"description": "Response message for running an assets export job.", +"id": "RunAssetsExportJobResponse", +"properties": { +"assetsExportJobExecution": { +"$ref": "AssetsExportJobExecution", +"description": "Output only. Execution status of the assets export operation.", +"readOnly": true +} +}, +"type": "object" +}, "RunImportJobRequest": { "description": "A request to run an import job.", "id": "RunImportJobRequest", @@ -7055,6 +7380,44 @@ false }, "type": "object" }, +"SignedUri": { +"description": "Contains a signed URI.", +"id": "SignedUri", +"properties": { +"file": { +"description": "Output only. Name of the file the Signed URI references.", +"readOnly": true, +"type": "string" +}, +"uri": { +"description": "Output only. 
Download URI for the file.", +"readOnly": true, +"type": "string" +} +}, +"type": "object" +}, +"SignedUriDestination": { +"description": "Signed URI destination configuration.", +"id": "SignedUriDestination", +"properties": {}, +"type": "object" +}, +"SignedUris": { +"description": "Contains a list of Signed URIs.", +"id": "SignedUris", +"properties": { +"signedUris": { +"description": "Output only. List of signed URIs.", +"items": { +"$ref": "SignedUri" +}, +"readOnly": true, +"type": "array" +} +}, +"type": "object" +}, "SoftwareInsight": { "description": "An insight regarding software detected on an asset.", "id": "SoftwareInsight", diff --git a/googleapiclient/discovery_cache/documents/monitoring.v1.json b/googleapiclient/discovery_cache/documents/monitoring.v1.json index eeda2395d95..c94f1659f74 100644 --- a/googleapiclient/discovery_cache/documents/monitoring.v1.json +++ b/googleapiclient/discovery_cache/documents/monitoring.v1.json @@ -753,7 +753,7 @@ } } }, -"revision": "20240519", +"revision": "20240526", "rootUrl": "https://monitoring.googleapis.com/", "schemas": { "Aggregation": { diff --git a/googleapiclient/discovery_cache/documents/monitoring.v3.json b/googleapiclient/discovery_cache/documents/monitoring.v3.json index f4080e1c52b..56748535399 100644 --- a/googleapiclient/discovery_cache/documents/monitoring.v3.json +++ b/googleapiclient/discovery_cache/documents/monitoring.v3.json @@ -2714,7 +2714,7 @@ } } }, -"revision": "20240519", +"revision": "20240526", "rootUrl": "https://monitoring.googleapis.com/", "schemas": { "Aggregation": { diff --git a/googleapiclient/discovery_cache/documents/mybusinessaccountmanagement.v1.json b/googleapiclient/discovery_cache/documents/mybusinessaccountmanagement.v1.json index 191fee3fc8f..8b24ba38e52 100644 --- a/googleapiclient/discovery_cache/documents/mybusinessaccountmanagement.v1.json +++ b/googleapiclient/discovery_cache/documents/mybusinessaccountmanagement.v1.json @@ -530,7 +530,7 @@ } } }, -"revision": "20240526", +"revision": "20240602", "rootUrl": "https://mybusinessaccountmanagement.googleapis.com/", "schemas": { "AcceptInvitationRequest": { diff --git a/googleapiclient/discovery_cache/documents/mybusinessbusinessinformation.v1.json b/googleapiclient/discovery_cache/documents/mybusinessbusinessinformation.v1.json index 75b670aab11..ee71120579a 100644 --- a/googleapiclient/discovery_cache/documents/mybusinessbusinessinformation.v1.json +++ b/googleapiclient/discovery_cache/documents/mybusinessbusinessinformation.v1.json @@ -612,7 +612,7 @@ } } }, -"revision": "20240526", +"revision": "20240602", "rootUrl": "https://mybusinessbusinessinformation.googleapis.com/", "schemas": { "AdWordsLocationExtensions": { diff --git a/googleapiclient/discovery_cache/documents/mybusinesslodging.v1.json b/googleapiclient/discovery_cache/documents/mybusinesslodging.v1.json index c50d030a483..f664a2278b7 100644 --- a/googleapiclient/discovery_cache/documents/mybusinesslodging.v1.json +++ b/googleapiclient/discovery_cache/documents/mybusinesslodging.v1.json @@ -194,7 +194,7 @@ } } }, -"revision": "20240526", +"revision": "20240602", "rootUrl": "https://mybusinesslodging.googleapis.com/", "schemas": { "Accessibility": { diff --git a/googleapiclient/discovery_cache/documents/mybusinessnotifications.v1.json b/googleapiclient/discovery_cache/documents/mybusinessnotifications.v1.json index 50b59ed9371..f632b509a7d 100644 --- a/googleapiclient/discovery_cache/documents/mybusinessnotifications.v1.json +++ 
b/googleapiclient/discovery_cache/documents/mybusinessnotifications.v1.json @@ -154,7 +154,7 @@ } } }, -"revision": "20240526", +"revision": "20240602", "rootUrl": "https://mybusinessnotifications.googleapis.com/", "schemas": { "NotificationSetting": { diff --git a/googleapiclient/discovery_cache/documents/mybusinessplaceactions.v1.json b/googleapiclient/discovery_cache/documents/mybusinessplaceactions.v1.json index 4f7a3379cec..ed8b69d4afe 100644 --- a/googleapiclient/discovery_cache/documents/mybusinessplaceactions.v1.json +++ b/googleapiclient/discovery_cache/documents/mybusinessplaceactions.v1.json @@ -281,7 +281,7 @@ } } }, -"revision": "20240526", +"revision": "20240602", "rootUrl": "https://mybusinessplaceactions.googleapis.com/", "schemas": { "Empty": { diff --git a/googleapiclient/discovery_cache/documents/mybusinessqanda.v1.json b/googleapiclient/discovery_cache/documents/mybusinessqanda.v1.json index 56ce7b2af40..440c2d46099 100644 --- a/googleapiclient/discovery_cache/documents/mybusinessqanda.v1.json +++ b/googleapiclient/discovery_cache/documents/mybusinessqanda.v1.json @@ -323,7 +323,7 @@ } } }, -"revision": "20240526", +"revision": "20240602", "rootUrl": "https://mybusinessqanda.googleapis.com/", "schemas": { "Answer": { diff --git a/googleapiclient/discovery_cache/documents/mybusinessverifications.v1.json b/googleapiclient/discovery_cache/documents/mybusinessverifications.v1.json index 7655107f6e1..c1add92c455 100644 --- a/googleapiclient/discovery_cache/documents/mybusinessverifications.v1.json +++ b/googleapiclient/discovery_cache/documents/mybusinessverifications.v1.json @@ -237,7 +237,7 @@ } } }, -"revision": "20240526", +"revision": "20240602", "rootUrl": "https://mybusinessverifications.googleapis.com/", "schemas": { "AddressVerificationData": { diff --git a/googleapiclient/discovery_cache/documents/networkconnectivity.v1.json b/googleapiclient/discovery_cache/documents/networkconnectivity.v1.json index cd305e91e7a..2bbb75f8015 100644 --- a/googleapiclient/discovery_cache/documents/networkconnectivity.v1.json +++ b/googleapiclient/discovery_cache/documents/networkconnectivity.v1.json @@ -950,7 +950,7 @@ "type": "string" }, "requestId": { -"description": "Optional. An optional request ID to identify requests. Specify a unique request ID so that if you must retry your request, the server will know to ignore the request if it has already been completed. The server will guarantee that for at least 60 minutes since the first request. For example, consider a situation where you make an initial request and the request times out. If you make the request again with the same request ID, the server can check if original operation with the same request ID was received, and if so, will ignore the second request. This prevents clients from accidentally creating duplicate commitments. The request ID must be a valid UUID with the exception that zero UUID is not supported (00000000-0000-0000-0000-000000000000).", +"description": "Optional. An optional request ID to identify requests. Specify a unique request ID so that if you must retry your request, the server knows to ignore the request if it has already been completed. The server guarantees that for at least 60 minutes since the first request. For example, consider a situation where you make an initial request and the request times out. If you make the request again with the same request ID, the server can check if original operation with the same request ID was received, and if so, ignores the second request. 
This prevents clients from accidentally creating duplicate commitments. The request ID must be a valid UUID with the exception that zero UUID is not supported (00000000-0000-0000-0000-000000000000).", "location": "query", "type": "string" } @@ -983,7 +983,7 @@ "type": "string" }, "requestId": { -"description": "Optional. An optional request ID to identify requests. Specify a unique request ID so that if you must retry your request, the server will know to ignore the request if it has already been completed. The server will guarantee that for at least 60 minutes after the first request. For example, consider a situation where you make an initial request and the request times out. If you make the request again with the same request ID, the server can check if original operation with the same request ID was received, and if so, will ignore the second request. This prevents clients from accidentally creating duplicate commitments. The request ID must be a valid UUID with the exception that zero UUID is not supported (00000000-0000-0000-0000-000000000000).", +"description": "Optional. An optional request ID to identify requests. Specify a unique request ID so that if you must retry your request, the server knows to ignore the request if it has already been completed. The server guarantees that for at least 60 minutes after the first request. For example, consider a situation where you make an initial request and the request times out. If you make the request again with the same request ID, the server can check if original operation with the same request ID was received, and if so, ignores the second request. This prevents clients from accidentally creating duplicate commitments. The request ID must be a valid UUID with the exception that zero UUID is not supported (00000000-0000-0000-0000-000000000000).", "location": "query", "type": "string" } @@ -2812,7 +2812,7 @@ } } }, -"revision": "20240508", +"revision": "20240523", "rootUrl": "https://networkconnectivity.googleapis.com/", "schemas": { "AcceptHubSpokeRequest": { @@ -2945,6 +2945,14 @@ "description": "The consumer project where PSC connections are allowed to be created in.", "type": "string" }, +"serviceAttachmentIpAddressMap": { +"additionalProperties": { +"type": "string" +}, +"description": "Output only. A map storing the mapping between customer VIP and target service attachment. Only service attachments with producer-specified IP addresses are stored here.", +"readOnly": true, +"type": "object" +}, "state": { "description": "Output only. Overall state of PSC Connections management for this consumer psc config.", "enum": [ @@ -3095,7 +3103,7 @@ "type": "string" }, "protocolVersion": { -"description": "Required. Internet protocol versions this policy-based route applies to. For this version, only IPV4 is supported.", +"description": "Required. Internet protocol versions this policy-based route applies to. For this version, only IPV4 is supported. IPV6 is supported in preview.", "enum": [ "PROTOCOL_VERSION_UNSPECIFIED", "IPV4" @@ -4018,6 +4026,63 @@ }, "type": "object" }, +"NextHopInterconnectAttachment": { +"description": "A route next hop that leads to an interconnect attachment resource.", +"id": "NextHopInterconnectAttachment", +"properties": { +"siteToSiteDataTransfer": { +"description": "Indicates whether site-to-site data transfer is allowed for this interconnect attachment resource.
Data transfer is available only in [supported locations](https://cloud.google.com/network-connectivity/docs/network-connectivity-center/concepts/locations).", +"type": "boolean" +}, +"uri": { +"description": "The URI of the interconnect attachment resource.", +"type": "string" +}, +"vpcNetwork": { +"description": "The VPC network where this interconnect attachment is located.", +"type": "string" +} +}, +"type": "object" +}, +"NextHopRouterApplianceInstance": { +"description": "A route next hop that leads to a Router appliance instance.", +"id": "NextHopRouterApplianceInstance", +"properties": { +"siteToSiteDataTransfer": { +"description": "Indicates whether site-to-site data transfer is allowed for this Router appliance instance resource. Data transfer is available only in [supported locations](https://cloud.google.com/network-connectivity/docs/network-connectivity-center/concepts/locations).", +"type": "boolean" +}, +"uri": { +"description": "The URI of the Router appliance instance.", +"type": "string" +}, +"vpcNetwork": { +"description": "The VPC network where this VM is located.", +"type": "string" +} +}, +"type": "object" +}, +"NextHopVPNTunnel": { +"description": "A route next hop that leads to a VPN tunnel resource.", +"id": "NextHopVPNTunnel", +"properties": { +"siteToSiteDataTransfer": { +"description": "Indicates whether site-to-site data transfer is allowed for this VPN tunnel resource. Data transfer is available only in [supported locations](https://cloud.google.com/network-connectivity/docs/network-connectivity-center/concepts/locations).", +"type": "boolean" +}, +"uri": { +"description": "The URI of the VPN tunnel resource.", +"type": "string" +}, +"vpcNetwork": { +"description": "The VPC network where this VPN tunnel is located.", +"type": "string" +} +}, +"type": "object" +}, "NextHopVpcNetwork": { "id": "NextHopVpcNetwork", "properties": { @@ -4104,7 +4169,7 @@ "type": "object" }, "PolicyBasedRoute": { -"description": "Policy-based routes route L4 network traffic based on not just destination IP address, but also source IP address, protocol, and more. If a policy-based route conflicts with other types of routes, the policy-based route always take precedence.", +"description": "Policy-based routes route L4 network traffic based on not just destination IP address, but also source IP address, protocol, and more. If a policy-based route conflicts with other types of routes, the policy-based route always takes precedence.", "id": "PolicyBasedRoute", "properties": { "createTime": { @@ -4157,7 +4222,7 @@ ], "enumDescriptions": [ "Default value.", -"Use the routes from the default routing tables (system-generated routes, custom routes, peering route) to determine the next hop. This will effectively exclude matching packets being applied on other PBRs with a lower priority." +"Use the routes from the default routing tables (system-generated routes, custom routes, peering route) to determine the next hop. This effectively excludes matching packets being applied on other PBRs with a lower priority." ], "type": "string" }, @@ -4179,7 +4244,7 @@ }, "virtualMachine": { "$ref": "VirtualMachine", -"description": "Optional. VM instances to which this policy-based route applies to." +"description": "Optional. VM instances that this policy-based route applies to." }, "warnings": { "description": "Output only. If potential misconfigurations are detected for this route, this field will be populated with warning messages.", @@ -4442,10 +4507,28 @@ "description": "Immutable. 
The name of the route. Route names must be unique. Route names use the following form: `projects/{project_number}/locations/global/hubs/{hub}/routeTables/{route_table_id}/routes/{route_id}`", "type": "string" }, +"nextHopInterconnectAttachment": { +"$ref": "NextHopInterconnectAttachment", +"description": "Immutable. The next-hop VLAN attachment for packets on this route." +}, +"nextHopRouterApplianceInstance": { +"$ref": "NextHopRouterApplianceInstance", +"description": "Immutable. The next-hop Router appliance instance for packets on this route." +}, "nextHopVpcNetwork": { "$ref": "NextHopVpcNetwork", "description": "Immutable. The destination VPC network for packets on this route." }, +"nextHopVpnTunnel": { +"$ref": "NextHopVPNTunnel", +"description": "Immutable. The next-hop VPN tunnel for packets on this route." +}, +"priority": { +"description": "Output only. The priority of this route. Priority is used to break ties in cases where a destination matches more than one route. In these cases the route with the lowest-numbered priority value wins.", +"format": "int64", +"readOnly": true, +"type": "string" +}, "spoke": { "description": "Immutable. The spoke that this route leads to. Example: projects/12345/locations/global/spokes/SPOKE", "type": "string" @@ -4482,12 +4565,14 @@ "enum": [ "ROUTE_TYPE_UNSPECIFIED", "VPC_PRIMARY_SUBNET", -"VPC_SECONDARY_SUBNET" +"VPC_SECONDARY_SUBNET", +"DYNAMIC_ROUTE" ], "enumDescriptions": [ "No route type information specified", "The route leads to a destination within the primary address range of the VPC network's subnet.", -"The route leads to a destination within the secondary address range of the VPC network's subnet." +"The route leads to a destination within the secondary address range of the VPC network's subnet.", +"The route leads to a destination in a dynamic route. Dynamic routes are derived from Border Gateway Protocol (BGP) advertisements received from an NCC hybrid spoke." ], "readOnly": true, "type": "string" @@ -5185,11 +5270,11 @@ "type": "object" }, "VirtualMachine": { -"description": "VM instances to which this policy-based route applies to.", +"description": "VM instances that this policy-based route applies to.", "id": "VirtualMachine", "properties": { "tags": { -"description": "Optional. A list of VM instance tags the this policy-based route applies to. VM instances that have ANY of tags specified here will install this PBR.", +"description": "Optional. A list of VM instance tags that this policy-based route applies to. VM instances that have ANY of the tags specified here install this PBR.", "items": { "type": "string" }, @@ -5211,7 +5296,7 @@ ], "enumDescriptions": [ "Default value.", -"The policy-based route is not active and functioning. Common causes are the dependent network was deleted or the resource project was turned off.", +"The policy-based route is not active and functioning. Common causes are that the dependent network was deleted or the resource project was turned off.", "The policy-based route is being modified (e.g. created/deleted) at this time."
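Returning to the Migration Center v1alpha1 additions a few hunks above: `assetsExportJobs` follows the usual create/run pattern, where `create` and `run` both return a long-running `Operation`, and finished runs surface signed download URIs on the job's `recentExecutions`. A minimal sketch using the method IDs from the discovery document; the project, location, and job ID are placeholders, and operation polling is elided:

```python
import uuid

from googleapiclient.discovery import build

service = build("migrationcenter", "v1alpha1")
parent = "projects/my-project/locations/us-central1"  # placeholder

# Create an export job that exports network dependency data to
# signed-URI downloadable files.
create_op = (
    service.projects()
    .locations()
    .assetsExportJobs()
    .create(
        parent=parent,
        assetsExportJobId="my-export-job",  # placeholder
        requestId=str(uuid.uuid4()),  # idempotency token, per the parameter docs
        body={
            "networkDependencies": {"maxDays": 30},
            "signedUriDestination": {},
        },
    )
    .execute()
)

# After the create operation completes, trigger a run of the job; the
# execution's signed URIs appear on the job resource once it finishes.
job_name = f"{parent}/assetsExportJobs/my-export-job"
run_op = (
    service.projects()
    .locations()
    .assetsExportJobs()
    .run(name=job_name, body={"requestId": str(uuid.uuid4())})
    .execute()
)
```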
], "readOnly": true, diff --git a/googleapiclient/discovery_cache/documents/networkconnectivity.v1alpha1.json b/googleapiclient/discovery_cache/documents/networkconnectivity.v1alpha1.json index 6449d05eb90..e41b66f0088 100644 --- a/googleapiclient/discovery_cache/documents/networkconnectivity.v1alpha1.json +++ b/googleapiclient/discovery_cache/documents/networkconnectivity.v1alpha1.json @@ -1116,7 +1116,7 @@ } } }, -"revision": "20240508", +"revision": "20240523", "rootUrl": "https://networkconnectivity.googleapis.com/", "schemas": { "AuditConfig": { diff --git a/googleapiclient/discovery_cache/documents/networkmanagement.v1.json b/googleapiclient/discovery_cache/documents/networkmanagement.v1.json index 25aaed1505e..b3b30343439 100644 --- a/googleapiclient/discovery_cache/documents/networkmanagement.v1.json +++ b/googleapiclient/discovery_cache/documents/networkmanagement.v1.json @@ -591,7 +591,7 @@ } } }, -"revision": "20240515", +"revision": "20240522", "rootUrl": "https://networkmanagement.googleapis.com/", "schemas": { "AbortInfo": { diff --git a/googleapiclient/discovery_cache/documents/networkmanagement.v1beta1.json b/googleapiclient/discovery_cache/documents/networkmanagement.v1beta1.json index e7e32c003c8..d562c80dd56 100644 --- a/googleapiclient/discovery_cache/documents/networkmanagement.v1beta1.json +++ b/googleapiclient/discovery_cache/documents/networkmanagement.v1beta1.json @@ -591,7 +591,7 @@ } } }, -"revision": "20240515", +"revision": "20240522", "rootUrl": "https://networkmanagement.googleapis.com/", "schemas": { "AbortInfo": { diff --git a/googleapiclient/discovery_cache/documents/ondemandscanning.v1.json b/googleapiclient/discovery_cache/documents/ondemandscanning.v1.json index f2b74104da8..512d84419bc 100644 --- a/googleapiclient/discovery_cache/documents/ondemandscanning.v1.json +++ b/googleapiclient/discovery_cache/documents/ondemandscanning.v1.json @@ -339,7 +339,7 @@ } } }, -"revision": "20240520", +"revision": "20240527", "rootUrl": "https://ondemandscanning.googleapis.com/", "schemas": { "AliasContext": { diff --git a/googleapiclient/discovery_cache/documents/ondemandscanning.v1beta1.json b/googleapiclient/discovery_cache/documents/ondemandscanning.v1beta1.json index c1aa345e3e7..388de7faf7c 100644 --- a/googleapiclient/discovery_cache/documents/ondemandscanning.v1beta1.json +++ b/googleapiclient/discovery_cache/documents/ondemandscanning.v1beta1.json @@ -339,7 +339,7 @@ } } }, -"revision": "20240520", +"revision": "20240527", "rootUrl": "https://ondemandscanning.googleapis.com/", "schemas": { "AliasContext": { diff --git a/googleapiclient/discovery_cache/documents/orgpolicy.v2.json b/googleapiclient/discovery_cache/documents/orgpolicy.v2.json index 75563bf2968..97d3697efc4 100644 --- a/googleapiclient/discovery_cache/documents/orgpolicy.v2.json +++ b/googleapiclient/discovery_cache/documents/orgpolicy.v2.json @@ -930,7 +930,7 @@ } } }, -"revision": "20240520", +"revision": "20240524", "rootUrl": "https://orgpolicy.googleapis.com/", "schemas": { "GoogleCloudOrgpolicyV2AlternatePolicySpec": { diff --git a/googleapiclient/discovery_cache/documents/osconfig.v1.json b/googleapiclient/discovery_cache/documents/osconfig.v1.json index a6651f1432f..58126d54e94 100644 --- a/googleapiclient/discovery_cache/documents/osconfig.v1.json +++ b/googleapiclient/discovery_cache/documents/osconfig.v1.json @@ -1083,7 +1083,7 @@ } } }, -"revision": "20240519", +"revision": "20240526", "rootUrl": "https://osconfig.googleapis.com/", "schemas": { "AptSettings": { diff --git 
a/googleapiclient/discovery_cache/documents/osconfig.v1alpha.json b/googleapiclient/discovery_cache/documents/osconfig.v1alpha.json index cfb2f351611..22d54cbf034 100644 --- a/googleapiclient/discovery_cache/documents/osconfig.v1alpha.json +++ b/googleapiclient/discovery_cache/documents/osconfig.v1alpha.json @@ -707,7 +707,7 @@ } } }, -"revision": "20240519", +"revision": "20240526", "rootUrl": "https://osconfig.googleapis.com/", "schemas": { "CVSSv3": { diff --git a/googleapiclient/discovery_cache/documents/osconfig.v1beta.json b/googleapiclient/discovery_cache/documents/osconfig.v1beta.json index 1bcbfa3e46a..158614ca5ad 100644 --- a/googleapiclient/discovery_cache/documents/osconfig.v1beta.json +++ b/googleapiclient/discovery_cache/documents/osconfig.v1beta.json @@ -689,7 +689,7 @@ } } }, -"revision": "20240519", +"revision": "20240526", "rootUrl": "https://osconfig.googleapis.com/", "schemas": { "AptRepository": { diff --git a/googleapiclient/discovery_cache/documents/pagespeedonline.v5.json b/googleapiclient/discovery_cache/documents/pagespeedonline.v5.json index e5ac781a3c7..f9e48ff7a46 100644 --- a/googleapiclient/discovery_cache/documents/pagespeedonline.v5.json +++ b/googleapiclient/discovery_cache/documents/pagespeedonline.v5.json @@ -201,7 +201,7 @@ false } } }, -"revision": "20240523", +"revision": "20240531", "rootUrl": "https://pagespeedonline.googleapis.com/", "schemas": { "AuditRefs": { diff --git a/googleapiclient/discovery_cache/documents/paymentsresellersubscription.v1.json b/googleapiclient/discovery_cache/documents/paymentsresellersubscription.v1.json index f4ab5fb25b1..25cf868c009 100644 --- a/googleapiclient/discovery_cache/documents/paymentsresellersubscription.v1.json +++ b/googleapiclient/discovery_cache/documents/paymentsresellersubscription.v1.json @@ -435,7 +435,7 @@ } } }, -"revision": "20240523", +"revision": "20240602", "rootUrl": "https://paymentsresellersubscription.googleapis.com/", "schemas": { "GoogleCloudPaymentsResellerSubscriptionV1Amount": { diff --git a/googleapiclient/discovery_cache/documents/people.v1.json b/googleapiclient/discovery_cache/documents/people.v1.json index 73a0aba3def..823be8df137 100644 --- a/googleapiclient/discovery_cache/documents/people.v1.json +++ b/googleapiclient/discovery_cache/documents/people.v1.json @@ -1190,7 +1190,7 @@ } } }, -"revision": "20240522", +"revision": "20240602", "rootUrl": "https://people.googleapis.com/", "schemas": { "Address": { diff --git a/googleapiclient/discovery_cache/documents/places.v1.json b/googleapiclient/discovery_cache/documents/places.v1.json index 021f0c642cb..956da97c4a8 100644 --- a/googleapiclient/discovery_cache/documents/places.v1.json +++ b/googleapiclient/discovery_cache/documents/places.v1.json @@ -276,7 +276,7 @@ } } }, -"revision": "20240526", +"revision": "20240602", "rootUrl": "https://places.googleapis.com/", "schemas": { "GoogleGeoTypeViewport": { diff --git a/googleapiclient/discovery_cache/documents/playcustomapp.v1.json b/googleapiclient/discovery_cache/documents/playcustomapp.v1.json index dde93abe993..3c82524d882 100644 --- a/googleapiclient/discovery_cache/documents/playcustomapp.v1.json +++ b/googleapiclient/discovery_cache/documents/playcustomapp.v1.json @@ -158,7 +158,7 @@ } } }, -"revision": "20240523", +"revision": "20240602", "rootUrl": "https://playcustomapp.googleapis.com/", "schemas": { "CustomApp": { diff --git a/googleapiclient/discovery_cache/documents/playdeveloperreporting.v1alpha1.json 
b/googleapiclient/discovery_cache/documents/playdeveloperreporting.v1alpha1.json index 03890eff886..2a594f19dce 100644 --- a/googleapiclient/discovery_cache/documents/playdeveloperreporting.v1alpha1.json +++ b/googleapiclient/discovery_cache/documents/playdeveloperreporting.v1alpha1.json @@ -947,7 +947,7 @@ } } }, -"revision": "20240523", +"revision": "20240602", "rootUrl": "https://playdeveloperreporting.googleapis.com/", "schemas": { "GooglePlayDeveloperReportingV1alpha1Anomaly": { diff --git a/googleapiclient/discovery_cache/documents/playdeveloperreporting.v1beta1.json b/googleapiclient/discovery_cache/documents/playdeveloperreporting.v1beta1.json index 3736e4d17b4..86f96946c52 100644 --- a/googleapiclient/discovery_cache/documents/playdeveloperreporting.v1beta1.json +++ b/googleapiclient/discovery_cache/documents/playdeveloperreporting.v1beta1.json @@ -947,7 +947,7 @@ } } }, -"revision": "20240523", +"revision": "20240602", "rootUrl": "https://playdeveloperreporting.googleapis.com/", "schemas": { "GooglePlayDeveloperReportingV1beta1Anomaly": { diff --git a/googleapiclient/discovery_cache/documents/playgrouping.v1alpha1.json b/googleapiclient/discovery_cache/documents/playgrouping.v1alpha1.json index f244a735f4b..b7e576113d8 100644 --- a/googleapiclient/discovery_cache/documents/playgrouping.v1alpha1.json +++ b/googleapiclient/discovery_cache/documents/playgrouping.v1alpha1.json @@ -177,7 +177,7 @@ } } }, -"revision": "20240523", +"revision": "20240602", "rootUrl": "https://playgrouping.googleapis.com/", "schemas": { "CreateOrUpdateTagsRequest": { diff --git a/googleapiclient/discovery_cache/documents/playintegrity.v1.json b/googleapiclient/discovery_cache/documents/playintegrity.v1.json index 1f1c755d45e..86b7a2c8740 100644 --- a/googleapiclient/discovery_cache/documents/playintegrity.v1.json +++ b/googleapiclient/discovery_cache/documents/playintegrity.v1.json @@ -138,7 +138,7 @@ } } }, -"revision": "20240523", +"revision": "20240602", "rootUrl": "https://playintegrity.googleapis.com/", "schemas": { "AccountActivity": { diff --git a/googleapiclient/discovery_cache/documents/policyanalyzer.v1.json b/googleapiclient/discovery_cache/documents/policyanalyzer.v1.json index e91173ddea2..66bdc558b9a 100644 --- a/googleapiclient/discovery_cache/documents/policyanalyzer.v1.json +++ b/googleapiclient/discovery_cache/documents/policyanalyzer.v1.json @@ -105,6 +105,120 @@ }, "protocol": "rest", "resources": { +"folders": { +"resources": { +"locations": { +"resources": { +"activityTypes": { +"resources": { +"activities": { +"methods": { +"query": { +"description": "Queries policy activities on Google Cloud resources.", +"flatPath": "v1/folders/{foldersId}/locations/{locationsId}/activityTypes/{activityTypesId}/activities:query", +"httpMethod": "GET", +"id": "policyanalyzer.folders.locations.activityTypes.activities.query", +"parameterOrder": [ +"parent" +], +"parameters": { +"filter": { +"description": "Optional. Filter expression to restrict the activities returned. For serviceAccountLastAuthentication activities, supported filters are: - `activities.full_resource_name {=} [STRING]` - `activities.fullResourceName {=} [STRING]` where `[STRING]` is the full resource name of the service account. 
For serviceAccountKeyLastAuthentication activities, supported filters are: - `activities.full_resource_name {=} [STRING]` - `activities.fullResourceName {=} [STRING]` where `[STRING]` is the full resource name of the service account key.", +"location": "query", +"type": "string" +}, +"pageSize": { +"description": "Optional. The maximum number of results to return from this request. Max limit is 1000. Non-positive values are ignored. The presence of `nextPageToken` in the response indicates that more results might be available.", +"format": "int32", +"location": "query", +"type": "integer" +}, +"pageToken": { +"description": "Optional. If present, then retrieve the next batch of results from the preceding call to this method. `pageToken` must be the value of `nextPageToken` from the previous response. The values of other method parameters should be identical to those in the previous call.", +"location": "query", +"type": "string" +}, +"parent": { +"description": "Required. The container resource on which to execute the request. Acceptable formats: `projects/[PROJECT_ID|PROJECT_NUMBER]/locations/[LOCATION]/activityTypes/[ACTIVITY_TYPE]` LOCATION here refers to Google Cloud Locations: https://cloud.google.com/about/locations/", +"location": "path", +"pattern": "^folders/[^/]+/locations/[^/]+/activityTypes/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1/{+parent}/activities:query", +"response": { +"$ref": "GoogleCloudPolicyanalyzerV1QueryActivityResponse" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +} +} +} +} +} +} +} +} +}, +"organizations": { +"resources": { +"locations": { +"resources": { +"activityTypes": { +"resources": { +"activities": { +"methods": { +"query": { +"description": "Queries policy activities on Google Cloud resources.", +"flatPath": "v1/organizations/{organizationsId}/locations/{locationsId}/activityTypes/{activityTypesId}/activities:query", +"httpMethod": "GET", +"id": "policyanalyzer.organizations.locations.activityTypes.activities.query", +"parameterOrder": [ +"parent" +], +"parameters": { +"filter": { +"description": "Optional. Filter expression to restrict the activities returned. For serviceAccountLastAuthentication activities, supported filters are: - `activities.full_resource_name {=} [STRING]` - `activities.fullResourceName {=} [STRING]` where `[STRING]` is the full resource name of the service account. For serviceAccountKeyLastAuthentication activities, supported filters are: - `activities.full_resource_name {=} [STRING]` - `activities.fullResourceName {=} [STRING]` where `[STRING]` is the full resource name of the service account key.", +"location": "query", +"type": "string" +}, +"pageSize": { +"description": "Optional. The maximum number of results to return from this request. Max limit is 1000. Non-positive values are ignored. The presence of `nextPageToken` in the response indicates that more results might be available.", +"format": "int32", +"location": "query", +"type": "integer" +}, +"pageToken": { +"description": "Optional. If present, then retrieve the next batch of results from the preceding call to this method. `pageToken` must be the value of `nextPageToken` from the previous response. The values of other method parameters should be identical to those in the previous call.", +"location": "query", +"type": "string" +}, +"parent": { +"description": "Required. The container resource on which to execute the request. 
Acceptable formats: `projects/[PROJECT_ID|PROJECT_NUMBER]/locations/[LOCATION]/activityTypes/[ACTIVITY_TYPE]` LOCATION here refers to Google Cloud Locations: https://cloud.google.com/about/locations/", +"location": "path", +"pattern": "^organizations/[^/]+/locations/[^/]+/activityTypes/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1/{+parent}/activities:query", +"response": { +"$ref": "GoogleCloudPolicyanalyzerV1QueryActivityResponse" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +} +} +} +} +} +} +} +} +}, "projects": { "resources": { "locations": { @@ -163,10 +277,11 @@ } } }, -"revision": "20240526", +"revision": "20240602", "rootUrl": "https://policyanalyzer.googleapis.com/", "schemas": { "GoogleCloudPolicyanalyzerV1Activity": { +"description": "Represents activity on a Google Cloud resource over a specific observation period.", "id": "GoogleCloudPolicyanalyzerV1Activity", "properties": { "activity": { diff --git a/googleapiclient/discovery_cache/documents/policyanalyzer.v1beta1.json b/googleapiclient/discovery_cache/documents/policyanalyzer.v1beta1.json index 6ebfcd85617..5420621e859 100644 --- a/googleapiclient/discovery_cache/documents/policyanalyzer.v1beta1.json +++ b/googleapiclient/discovery_cache/documents/policyanalyzer.v1beta1.json @@ -105,6 +105,120 @@ }, "protocol": "rest", "resources": { +"folders": { +"resources": { +"locations": { +"resources": { +"activityTypes": { +"resources": { +"activities": { +"methods": { +"query": { +"description": "Queries policy activities on GCP resources.", +"flatPath": "v1beta1/folders/{foldersId}/locations/{locationsId}/activityTypes/{activityTypesId}/activities:query", +"httpMethod": "GET", +"id": "policyanalyzer.folders.locations.activityTypes.activities.query", +"parameterOrder": [ +"parent" +], +"parameters": { +"filter": { +"description": "Optional. Filter expression to restrict the activities returned. Supported filters are: - service_account_last_authn.full_resource_name {=} - service_account_key_last_authn.full_resource_name {=} ", +"location": "query", +"type": "string" +}, +"pageSize": { +"description": "Optional. The maximum number of results to return from this request. Max limit is 1000. Non-positive values are ignored. The presence of `nextPageToken` in the response indicates that more results might be available.", +"format": "int32", +"location": "query", +"type": "integer" +}, +"pageToken": { +"description": "Optional. If present, then retrieve the next batch of results from the preceding call to this method. `pageToken` must be the value of `nextPageToken` from the previous response. The values of other method parameters should be identical to those in the previous call.", +"location": "query", +"type": "string" +}, +"parent": { +"description": "Required. The container resource on which to execute the request.
Acceptable formats: `projects/[PROJECT_ID|PROJECT_NUMBER]/locations/[LOCATION]/activityTypes/[ACTIVITY_TYPE]` LOCATION here refers to GCP Locations: https://cloud.google.com/about/locations/", +"location": "path", +"pattern": "^folders/[^/]+/locations/[^/]+/activityTypes/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1beta1/{+parent}/activities:query", +"response": { +"$ref": "GoogleCloudPolicyanalyzerV1beta1QueryActivityResponse" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +} +} +} +} +} +} +} +} +}, +"organizations": { +"resources": { +"locations": { +"resources": { +"activityTypes": { +"resources": { +"activities": { +"methods": { +"query": { +"description": "Queries policy activities on GCP resources.", +"flatPath": "v1beta1/organizations/{organizationsId}/locations/{locationsId}/activityTypes/{activityTypesId}/activities:query", +"httpMethod": "GET", +"id": "policyanalyzer.organizations.locations.activityTypes.activities.query", +"parameterOrder": [ +"parent" +], +"parameters": { +"filter": { +"description": "Optional. Filter expression to restrict the activities returned. Supported filters are: - service_account_last_authn.full_resource_name {=} - service_account_key_last_authn.full_resource_name {=} ", +"location": "query", +"type": "string" +}, +"pageSize": { +"description": "Optional. The maximum number of results to return from this request. Max limit is 1000. Non-positive values are ignored. The presence of `nextPageToken` in the response indicates that more results might be available.", +"format": "int32", +"location": "query", +"type": "integer" +}, +"pageToken": { +"description": "Optional. If present, then retrieve the next batch of results from the preceding call to this method. `pageToken` must be the value of `nextPageToken` from the previous response. The values of other method parameters should be identical to those in the previous call.", +"location": "query", +"type": "string" +}, +"parent": { +"description": "Required. The container resource on which to execute the request.
Acceptable formats: `projects/[PROJECT_ID|PROJECT_NUMBER]/locations/[LOCATION]/activityTypes/[ACTIVITY_TYPE]` LOCATION here refers to GCP Locations: https://cloud.google.com/about/locations/", +"location": "path", +"pattern": "^organizations/[^/]+/locations/[^/]+/activityTypes/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1beta1/{+parent}/activities:query", +"response": { +"$ref": "GoogleCloudPolicyanalyzerV1beta1QueryActivityResponse" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +} +} +} +} +} +} +} +} +}, "projects": { "resources": { "locations": { @@ -163,10 +277,11 @@ } } }, -"revision": "20240526", +"revision": "20240602", "rootUrl": "https://policyanalyzer.googleapis.com/", "schemas": { "GoogleCloudPolicyanalyzerV1beta1Activity": { +"description": "Represents activity on a GCP resource over a specific observation period.", "id": "GoogleCloudPolicyanalyzerV1beta1Activity", "properties": { "activity": { diff --git a/googleapiclient/discovery_cache/documents/policysimulator.v1.json b/googleapiclient/discovery_cache/documents/policysimulator.v1.json index 9faca1d65cf..71ec6c6265d 100644 --- a/googleapiclient/discovery_cache/documents/policysimulator.v1.json +++ b/googleapiclient/discovery_cache/documents/policysimulator.v1.json @@ -942,7 +942,7 @@ } } }, -"revision": "20240519", +"revision": "20240526", "rootUrl": "https://policysimulator.googleapis.com/", "schemas": { "GoogleCloudOrgpolicyV2AlternatePolicySpec": { diff --git a/googleapiclient/discovery_cache/documents/policysimulator.v1alpha.json b/googleapiclient/discovery_cache/documents/policysimulator.v1alpha.json index 8df5475b2b0..76bf2cfd17e 100644 --- a/googleapiclient/discovery_cache/documents/policysimulator.v1alpha.json +++ b/googleapiclient/discovery_cache/documents/policysimulator.v1alpha.json @@ -1078,7 +1078,7 @@ } } }, -"revision": "20240519", +"revision": "20240526", "rootUrl": "https://policysimulator.googleapis.com/", "schemas": { "GoogleCloudOrgpolicyV2AlternatePolicySpec": { diff --git a/googleapiclient/discovery_cache/documents/policysimulator.v1beta.json b/googleapiclient/discovery_cache/documents/policysimulator.v1beta.json index 63c7a7dcf73..40449be6d92 100644 --- a/googleapiclient/discovery_cache/documents/policysimulator.v1beta.json +++ b/googleapiclient/discovery_cache/documents/policysimulator.v1beta.json @@ -1078,7 +1078,7 @@ } } }, -"revision": "20240519", +"revision": "20240526", "rootUrl": "https://policysimulator.googleapis.com/", "schemas": { "GoogleCloudOrgpolicyV2AlternatePolicySpec": { diff --git a/googleapiclient/discovery_cache/documents/policytroubleshooter.v1.json b/googleapiclient/discovery_cache/documents/policytroubleshooter.v1.json index 6f867cdab50..d49a974a4e4 100644 --- a/googleapiclient/discovery_cache/documents/policytroubleshooter.v1.json +++ b/googleapiclient/discovery_cache/documents/policytroubleshooter.v1.json @@ -128,7 +128,7 @@ } } }, -"revision": "20240526", +"revision": "20240602", "rootUrl": "https://policytroubleshooter.googleapis.com/", "schemas": { "GoogleCloudPolicytroubleshooterV1AccessTuple": { diff --git a/googleapiclient/discovery_cache/documents/policytroubleshooter.v1beta.json b/googleapiclient/discovery_cache/documents/policytroubleshooter.v1beta.json index 7c15cb3720a..9cbf1dcf723 100644 --- a/googleapiclient/discovery_cache/documents/policytroubleshooter.v1beta.json +++ b/googleapiclient/discovery_cache/documents/policytroubleshooter.v1beta.json @@ -128,7 +128,7 @@ } } }, -"revision": "20240526", +"revision": "20240602",
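The Policy Analyzer change above extends the existing project-level `activities:query` method to folders and organizations. A short sketch of an organization-level query on the v1 surface, following the method ID `policyanalyzer.organizations.locations.activityTypes.activities.query`; the organization number and service account resource name are placeholders, and the response list field name (`activities`) is assumed from the `QueryActivityResponse` schema name:

```python
from googleapiclient.discovery import build

service = build("policyanalyzer", "v1")

# Container resource on which to execute the request:
# organizations/[ORG]/locations/[LOCATION]/activityTypes/[ACTIVITY_TYPE]
parent = (
    "organizations/123456789012/locations/global/"  # placeholder org number
    "activityTypes/serviceAccountLastAuthentication"
)

response = (
    service.organizations()
    .locations()
    .activityTypes()
    .activities()
    .query(
        parent=parent,
        pageSize=500,  # max limit is 1000, per the parameter docs
        # Supported filter from the discovery doc; the resource name is a placeholder.
        filter=(
            'activities.fullResourceName = '
            '"//iam.googleapis.com/projects/my-project/serviceAccounts/'
            'my-sa@my-project.iam.gserviceaccount.com"'
        ),
    )
    .execute()
)

# A nextPageToken in the response indicates that more results might be available.
for activity in response.get("activities", []):
    print(activity)
```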
"rootUrl": "https://policytroubleshooter.googleapis.com/", "schemas": { "GoogleCloudPolicytroubleshooterV1betaAccessTuple": { diff --git a/googleapiclient/discovery_cache/documents/prod_tt_sasportal.v1alpha1.json b/googleapiclient/discovery_cache/documents/prod_tt_sasportal.v1alpha1.json index dd747e54e13..8f6ce8d6c81 100644 --- a/googleapiclient/discovery_cache/documents/prod_tt_sasportal.v1alpha1.json +++ b/googleapiclient/discovery_cache/documents/prod_tt_sasportal.v1alpha1.json @@ -2653,7 +2653,7 @@ } } }, -"revision": "20240523", +"revision": "20240602", "rootUrl": "https://prod-tt-sasportal.googleapis.com/", "schemas": { "SasPortalAssignment": { diff --git a/googleapiclient/discovery_cache/documents/publicca.v1.json b/googleapiclient/discovery_cache/documents/publicca.v1.json index 6ea1d8aa939..dce19ffdc15 100644 --- a/googleapiclient/discovery_cache/documents/publicca.v1.json +++ b/googleapiclient/discovery_cache/documents/publicca.v1.json @@ -146,7 +146,7 @@ } } }, -"revision": "20240521", +"revision": "20240527", "rootUrl": "https://publicca.googleapis.com/", "schemas": { "ExternalAccountKey": { diff --git a/googleapiclient/discovery_cache/documents/publicca.v1alpha1.json b/googleapiclient/discovery_cache/documents/publicca.v1alpha1.json index 268d9209e4e..5148deda63e 100644 --- a/googleapiclient/discovery_cache/documents/publicca.v1alpha1.json +++ b/googleapiclient/discovery_cache/documents/publicca.v1alpha1.json @@ -146,7 +146,7 @@ } } }, -"revision": "20240521", +"revision": "20240527", "rootUrl": "https://publicca.googleapis.com/", "schemas": { "ExternalAccountKey": { diff --git a/googleapiclient/discovery_cache/documents/publicca.v1beta1.json b/googleapiclient/discovery_cache/documents/publicca.v1beta1.json index 5490fecf8f7..22ead52d253 100644 --- a/googleapiclient/discovery_cache/documents/publicca.v1beta1.json +++ b/googleapiclient/discovery_cache/documents/publicca.v1beta1.json @@ -146,7 +146,7 @@ } } }, -"revision": "20240521", +"revision": "20240527", "rootUrl": "https://publicca.googleapis.com/", "schemas": { "ExternalAccountKey": { diff --git a/googleapiclient/discovery_cache/documents/pubsub.v1.json b/googleapiclient/discovery_cache/documents/pubsub.v1.json index 33e5fbbb041..876c3f6c812 100644 --- a/googleapiclient/discovery_cache/documents/pubsub.v1.json +++ b/googleapiclient/discovery_cache/documents/pubsub.v1.json @@ -1583,7 +1583,7 @@ } } }, -"revision": "20240514", +"revision": "20240528", "rootUrl": "https://pubsub.googleapis.com/", "schemas": { "AcknowledgeRequest": { diff --git a/googleapiclient/discovery_cache/documents/pubsub.v1beta1a.json b/googleapiclient/discovery_cache/documents/pubsub.v1beta1a.json index fcaf9d3c323..75a221d605d 100644 --- a/googleapiclient/discovery_cache/documents/pubsub.v1beta1a.json +++ b/googleapiclient/discovery_cache/documents/pubsub.v1beta1a.json @@ -474,7 +474,7 @@ } } }, -"revision": "20240514", +"revision": "20240528", "rootUrl": "https://pubsub.googleapis.com/", "schemas": { "AcknowledgeRequest": { diff --git a/googleapiclient/discovery_cache/documents/pubsub.v1beta2.json b/googleapiclient/discovery_cache/documents/pubsub.v1beta2.json index 7ae2ace277d..948fa17e9eb 100644 --- a/googleapiclient/discovery_cache/documents/pubsub.v1beta2.json +++ b/googleapiclient/discovery_cache/documents/pubsub.v1beta2.json @@ -741,7 +741,7 @@ } } }, -"revision": "20240514", +"revision": "20240528", "rootUrl": "https://pubsub.googleapis.com/", "schemas": { "AcknowledgeRequest": { diff --git 
a/googleapiclient/discovery_cache/documents/pubsublite.v1.json b/googleapiclient/discovery_cache/documents/pubsublite.v1.json index 8c021eff161..078cfb3ca8e 100644 --- a/googleapiclient/discovery_cache/documents/pubsublite.v1.json +++ b/googleapiclient/discovery_cache/documents/pubsublite.v1.json @@ -1040,7 +1040,7 @@ } } }, -"revision": "20240510", +"revision": "20240524", "rootUrl": "https://pubsublite.googleapis.com/", "schemas": { "CancelOperationRequest": { diff --git a/googleapiclient/discovery_cache/documents/readerrevenuesubscriptionlinking.v1.json b/googleapiclient/discovery_cache/documents/readerrevenuesubscriptionlinking.v1.json index d6a527d3316..a8b97ffa99b 100644 --- a/googleapiclient/discovery_cache/documents/readerrevenuesubscriptionlinking.v1.json +++ b/googleapiclient/discovery_cache/documents/readerrevenuesubscriptionlinking.v1.json @@ -207,7 +207,7 @@ } } }, -"revision": "20240526", +"revision": "20240528", "rootUrl": "https://readerrevenuesubscriptionlinking.googleapis.com/", "schemas": { "DeleteReaderResponse": { diff --git a/googleapiclient/discovery_cache/documents/realtimebidding.v1.json b/googleapiclient/discovery_cache/documents/realtimebidding.v1.json index fccbb891451..32897352604 100644 --- a/googleapiclient/discovery_cache/documents/realtimebidding.v1.json +++ b/googleapiclient/discovery_cache/documents/realtimebidding.v1.json @@ -1305,7 +1305,7 @@ } } }, -"revision": "20240523", +"revision": "20240603", "rootUrl": "https://realtimebidding.googleapis.com/", "schemas": { "ActivatePretargetingConfigRequest": { diff --git a/googleapiclient/discovery_cache/documents/recaptchaenterprise.v1.json b/googleapiclient/discovery_cache/documents/recaptchaenterprise.v1.json index 8306c846c49..38a8ef2e16f 100644 --- a/googleapiclient/discovery_cache/documents/recaptchaenterprise.v1.json +++ b/googleapiclient/discovery_cache/documents/recaptchaenterprise.v1.json @@ -694,7 +694,7 @@ } } }, -"revision": "20240518", +"revision": "20240526", "rootUrl": "https://recaptchaenterprise.googleapis.com/", "schemas": { "GoogleCloudRecaptchaenterpriseV1AccountDefenderAssessment": { @@ -1553,7 +1553,7 @@ true "properties": { "smsTollFraudVerdict": { "$ref": "GoogleCloudRecaptchaenterpriseV1SmsTollFraudVerdict", -"description": "Output only. Assessment of this phone event for risk of sms toll fraud.", +"description": "Output only. Assessment of this phone event for risk of SMS toll fraud.", "readOnly": true } }, @@ -1781,7 +1781,7 @@ true "type": "object" }, "GoogleCloudRecaptchaenterpriseV1SmsTollFraudVerdict": { -"description": "Information about sms toll fraud", +"description": "Information about SMS toll fraud.", "id": "GoogleCloudRecaptchaenterpriseV1SmsTollFraudVerdict", "properties": { "reasons": { @@ -1801,7 +1801,7 @@ true "type": "array" }, "risk": { -"description": "Output only. Probability of an sms event being fraudulent. Values are from 0.0 (lowest) to 1.0 (highest).", +"description": "Output only. Probability of an SMS event being fraudulent. 
Values are from 0.0 (lowest) to 1.0 (highest).", "format": "float", "readOnly": true, "type": "number" diff --git a/googleapiclient/discovery_cache/documents/recommendationengine.v1beta1.json b/googleapiclient/discovery_cache/documents/recommendationengine.v1beta1.json index 91779cc18ee..186b0ee9644 100644 --- a/googleapiclient/discovery_cache/documents/recommendationengine.v1beta1.json +++ b/googleapiclient/discovery_cache/documents/recommendationengine.v1beta1.json @@ -841,7 +841,7 @@ } } }, -"revision": "20240516", +"revision": "20240523", "rootUrl": "https://recommendationengine.googleapis.com/", "schemas": { "GoogleApiHttpBody": { diff --git a/googleapiclient/discovery_cache/documents/reseller.v1.json b/googleapiclient/discovery_cache/documents/reseller.v1.json index f76fa40badd..9c6b8b0e4dc 100644 --- a/googleapiclient/discovery_cache/documents/reseller.v1.json +++ b/googleapiclient/discovery_cache/documents/reseller.v1.json @@ -651,7 +651,7 @@ } } }, -"revision": "20240523", +"revision": "20240531", "rootUrl": "https://reseller.googleapis.com/", "schemas": { "Address": { diff --git a/googleapiclient/discovery_cache/documents/resourcesettings.v1.json b/googleapiclient/discovery_cache/documents/resourcesettings.v1.json index 1cddb2f9b17..8f729d27b1d 100644 --- a/googleapiclient/discovery_cache/documents/resourcesettings.v1.json +++ b/googleapiclient/discovery_cache/documents/resourcesettings.v1.json @@ -108,8 +108,10 @@ "folders": { "resources": { "settings": { +"deprecated": true, "methods": { "get": { +"deprecated": true, "description": "Returns a specified setting. Returns a `google.rpc.Status` with `google.rpc.Code.NOT_FOUND` if the setting does not exist.", "flatPath": "v1/folders/{foldersId}/settings/{settingsId}", "httpMethod": "GET", @@ -152,6 +154,7 @@ ] }, "list": { +"deprecated": true, "description": "Lists all the settings that are available on the Cloud resource `parent`.", "flatPath": "v1/folders/{foldersId}/settings", "httpMethod": "GET", @@ -205,6 +208,7 @@ ] }, "patch": { +"deprecated": true, "description": "Updates a specified setting. Returns a `google.rpc.Status` with `google.rpc.Code.NOT_FOUND` if the setting does not exist. Returns a `google.rpc.Status` with `google.rpc.Code.FAILED_PRECONDITION` if the setting is flagged as read only. Returns a `google.rpc.Status` with `google.rpc.Code.ABORTED` if the etag supplied in the request does not match the persisted etag of the setting value. On success, the response will contain only `name`, `local_value` and `etag`. The `metadata` and `effective_value` cannot be updated through this API. Note: the supplied setting will perform a full overwrite of the `local_value` field.", "flatPath": "v1/folders/{foldersId}/settings/{settingsId}", "httpMethod": "PATCH", @@ -239,8 +243,10 @@ "organizations": { "resources": { "settings": { +"deprecated": true, "methods": { "get": { +"deprecated": true, "description": "Returns a specified setting. Returns a `google.rpc.Status` with `google.rpc.Code.NOT_FOUND` if the setting does not exist.", "flatPath": "v1/organizations/{organizationsId}/settings/{settingsId}", "httpMethod": "GET", @@ -283,6 +289,7 @@ ] }, "list": { +"deprecated": true, "description": "Lists all the settings that are available on the Cloud resource `parent`.", "flatPath": "v1/organizations/{organizationsId}/settings", "httpMethod": "GET", @@ -336,6 +343,7 @@ ] }, "patch": { +"deprecated": true, "description": "Updates a specified setting. 
Returns a `google.rpc.Status` with `google.rpc.Code.NOT_FOUND` if the setting does not exist. Returns a `google.rpc.Status` with `google.rpc.Code.FAILED_PRECONDITION` if the setting is flagged as read only. Returns a `google.rpc.Status` with `google.rpc.Code.ABORTED` if the etag supplied in the request does not match the persisted etag of the setting value. On success, the response will contain only `name`, `local_value` and `etag`. The `metadata` and `effective_value` cannot be updated through this API. Note: the supplied setting will perform a full overwrite of the `local_value` field.", "flatPath": "v1/organizations/{organizationsId}/settings/{settingsId}", "httpMethod": "PATCH", @@ -370,8 +378,10 @@ "projects": { "resources": { "settings": { +"deprecated": true, "methods": { "get": { +"deprecated": true, "description": "Returns a specified setting. Returns a `google.rpc.Status` with `google.rpc.Code.NOT_FOUND` if the setting does not exist.", "flatPath": "v1/projects/{projectsId}/settings/{settingsId}", "httpMethod": "GET", @@ -414,6 +424,7 @@ ] }, "list": { +"deprecated": true, "description": "Lists all the settings that are available on the Cloud resource `parent`.", "flatPath": "v1/projects/{projectsId}/settings", "httpMethod": "GET", @@ -467,6 +478,7 @@ ] }, "patch": { +"deprecated": true, "description": "Updates a specified setting. Returns a `google.rpc.Status` with `google.rpc.Code.NOT_FOUND` if the setting does not exist. Returns a `google.rpc.Status` with `google.rpc.Code.FAILED_PRECONDITION` if the setting is flagged as read only. Returns a `google.rpc.Status` with `google.rpc.Code.ABORTED` if the etag supplied in the request does not match the persisted etag of the setting value. On success, the response will contain only `name`, `local_value` and `etag`. The `metadata` and `effective_value` cannot be updated through this API. Note: the supplied setting will perform a full overwrite of the `local_value` field.", "flatPath": "v1/projects/{projectsId}/settings/{settingsId}", "httpMethod": "PATCH", @@ -499,7 +511,7 @@ } } }, -"revision": "20240526", +"revision": "20240602", "rootUrl": "https://resourcesettings.googleapis.com/", "schemas": { "GoogleCloudResourcesettingsV1ListSettingsResponse": { diff --git a/googleapiclient/discovery_cache/documents/run.v1.json b/googleapiclient/discovery_cache/documents/run.v1.json index ea54d31b8dc..40eba0ca5a5 100644 --- a/googleapiclient/discovery_cache/documents/run.v1.json +++ b/googleapiclient/discovery_cache/documents/run.v1.json @@ -377,7 +377,7 @@ ] }, "list": { -"description": "List configurations.", +"description": "List configurations. Results are sorted by creation time, descending.", "flatPath": "apis/serving.knative.dev/v1/namespaces/{namespacesId}/configurations", "httpMethod": "GET", "id": "run.namespaces.configurations.list", @@ -703,7 +703,7 @@ ] }, "list": { -"description": "List executions.", +"description": "List executions. Results are sorted by creation time, descending.", "flatPath": "apis/run.googleapis.com/v1/namespaces/{namespacesId}/executions", "httpMethod": "GET", "id": "run.namespaces.executions.list", @@ -861,7 +861,7 @@ ] }, "list": { -"description": "List jobs.", +"description": "List jobs. Results are sorted by creation time, descending.", "flatPath": "apis/run.googleapis.com/v1/namespaces/{namespacesId}/jobs", "httpMethod": "GET", "id": "run.namespaces.jobs.list", @@ -1052,7 +1052,7 @@ ] }, "list": { -"description": "List revisions.", +"description": "List revisions. 
Results are sorted by creation time, descending.", "flatPath": "apis/serving.knative.dev/v1/namespaces/{namespacesId}/revisions", "httpMethod": "GET", "id": "run.namespaces.revisions.list", @@ -1142,7 +1142,7 @@ ] }, "list": { -"description": "List routes.", +"description": "List routes. Results are sorted by creation time, descending.", "flatPath": "apis/serving.knative.dev/v1/namespaces/{namespacesId}/routes", "httpMethod": "GET", "id": "run.namespaces.routes.list", @@ -1310,7 +1310,7 @@ ] }, "list": { -"description": "Lists services for the given project and region.", +"description": "Lists services for the given project and region. Results are sorted by creation time, descending.", "flatPath": "apis/serving.knative.dev/v1/namespaces/{namespacesId}/services", "httpMethod": "GET", "id": "run.namespaces.services.list", @@ -1652,7 +1652,7 @@ ] }, "list": { -"description": "List configurations.", +"description": "List configurations. Results are sorted by creation time, descending.", "flatPath": "v1/projects/{projectsId}/locations/{locationsId}/configurations", "httpMethod": "GET", "id": "run.projects.locations.configurations.list", @@ -2169,7 +2169,7 @@ ] }, "list": { -"description": "List revisions.", +"description": "List revisions. Results are sorted by creation time, descending.", "flatPath": "v1/projects/{projectsId}/locations/{locationsId}/revisions", "httpMethod": "GET", "id": "run.projects.locations.revisions.list", @@ -2259,7 +2259,7 @@ ] }, "list": { -"description": "List routes.", +"description": "List routes. Results are sorted by creation time, descending.", "flatPath": "v1/projects/{projectsId}/locations/{locationsId}/routes", "httpMethod": "GET", "id": "run.projects.locations.routes.list", @@ -2458,7 +2458,7 @@ ] }, "list": { -"description": "Lists services for the given project and region.", +"description": "Lists services for the given project and region. Results are sorted by creation time, descending.", "flatPath": "v1/projects/{projectsId}/locations/{locationsId}/services", "httpMethod": "GET", "id": "run.projects.locations.services.list", @@ -2614,7 +2614,7 @@ } } }, -"revision": "20240510", +"revision": "20240531", "rootUrl": "https://run.googleapis.com/", "schemas": { "Addressable": { @@ -3625,7 +3625,7 @@ }, "source": { "$ref": "GoogleDevtoolsCloudbuildV1Source", -"description": "The location of the source files to build." +"description": "Optional. The location of the source files to build." }, "sourceProvenance": { "$ref": "GoogleDevtoolsCloudbuildV1SourceProvenance", @@ -3785,7 +3785,7 @@ "type": "string" }, "diskSizeGb": { -"description": "Requested disk size for the VM that runs the build. Note that this is *NOT* \"disk free\"; some of the space will be used by the operating system and build utilities. Also note that this is the minimum disk size that will be allocated for the build -- the build may run with a larger disk than requested. At present, the maximum disk size is 2000GB; builds that request more than the maximum are rejected with an error.", +"description": "Requested disk size for the VM that runs the build. Note that this is *NOT* \"disk free\"; some of the space will be used by the operating system and build utilities. Also note that this is the minimum disk size that will be allocated for the build -- the build may run with a larger disk than requested. 
At present, the maximum disk size is 4000GB; builds that request more than the maximum are rejected with an error.", "format": "int64", "type": "string" }, @@ -4092,7 +4092,7 @@ false "id": "GoogleDevtoolsCloudbuildV1ConnectedRepository", "properties": { "dir": { -"description": "Directory, relative to the source root, in which to run the build.", +"description": "Optional. Directory, relative to the source root, in which to run the build.", "type": "string" }, "repository": { @@ -4100,7 +4100,7 @@ false "type": "string" }, "revision": { -"description": "The revision to fetch from the Git repository such as a branch, a tag, a commit SHA, or any Git ref.", +"description": "Required. The revision to fetch from the Git repository such as a branch, a tag, a commit SHA, or any Git ref.", "type": "string" } }, @@ -4208,15 +4208,15 @@ false "id": "GoogleDevtoolsCloudbuildV1GitSource", "properties": { "dir": { -"description": "Directory, relative to the source root, in which to run the build. This must be a relative path. If a step's `dir` is specified and is an absolute path, this value is ignored for that step's execution.", +"description": "Optional. Directory, relative to the source root, in which to run the build. This must be a relative path. If a step's `dir` is specified and is an absolute path, this value is ignored for that step's execution.", "type": "string" }, "revision": { -"description": "The revision to fetch from the Git repository such as a branch, a tag, a commit SHA, or any Git ref. Cloud Build uses `git fetch` to fetch the revision from the Git repository; therefore make sure that the string you provide for `revision` is parsable by the command. For information on string values accepted by `git fetch`, see https://git-scm.com/docs/gitrevisions#_specifying_revisions. For information on `git fetch`, see https://git-scm.com/docs/git-fetch.", +"description": "Optional. The revision to fetch from the Git repository such as a branch, a tag, a commit SHA, or any Git ref. Cloud Build uses `git fetch` to fetch the revision from the Git repository; therefore make sure that the string you provide for `revision` is parsable by the command. For information on string values accepted by `git fetch`, see https://git-scm.com/docs/gitrevisions#_specifying_revisions. For information on `git fetch`, see https://git-scm.com/docs/git-fetch.", "type": "string" }, "url": { -"description": "Location of the Git repo to build. This will be used as a `git remote`, see https://git-scm.com/docs/git-remote.", +"description": "Required. Location of the Git repo to build. This will be used as a `git remote`, see https://git-scm.com/docs/git-remote.", "type": "string" } }, @@ -4368,26 +4368,26 @@ false "type": "string" }, "dir": { -"description": "Directory, relative to the source root, in which to run the build. This must be a relative path. If a step's `dir` is specified and is an absolute path, this value is ignored for that step's execution.", +"description": "Optional. Directory, relative to the source root, in which to run the build. This must be a relative path. If a step's `dir` is specified and is an absolute path, this value is ignored for that step's execution.", "type": "string" }, "invertRegex": { -"description": "Only trigger a build if the revision regex does NOT match the revision regex.", +"description": "Optional. 
Only trigger a build if the revision regex does NOT match the revision regex.", "type": "boolean" }, "projectId": { -"description": "ID of the project that owns the Cloud Source Repository. If omitted, the project ID requesting the build is assumed.", +"description": "Optional. ID of the project that owns the Cloud Source Repository. If omitted, the project ID requesting the build is assumed.", "type": "string" }, "repoName": { -"description": "Name of the Cloud Source Repository.", +"description": "Required. Name of the Cloud Source Repository.", "type": "string" }, "substitutions": { "additionalProperties": { "type": "string" }, -"description": "Substitutions to use in a triggered build. Should only be used with RunBuildTrigger", +"description": "Optional. Substitutions to use in a triggered build. Should only be used with RunBuildTrigger", "type": "object" }, "tagName": { @@ -4592,12 +4592,12 @@ false "type": "string" }, "generation": { -"description": "Cloud Storage generation for the object. If the generation is omitted, the latest generation will be used.", +"description": "Optional. Cloud Storage generation for the object. If the generation is omitted, the latest generation will be used.", "format": "int64", "type": "string" }, "object": { -"description": "Cloud Storage object containing the source. This object must be a zipped (`.zip`) or gzipped archive file (`.tar.gz`) containing source to build.", +"description": "Required. Cloud Storage object containing the source. This object must be a zipped (`.zip`) or gzipped archive file (`.tar.gz`) containing source to build.", "type": "string" }, "sourceFetcher": { @@ -4622,7 +4622,7 @@ false "id": "GoogleDevtoolsCloudbuildV1StorageSourceManifest", "properties": { "bucket": { -"description": "Cloud Storage bucket containing the source manifest (see [Bucket Name Requirements](https://cloud.google.com/storage/docs/bucket-naming#requirements)).", +"description": "Required. Cloud Storage bucket containing the source manifest (see [Bucket Name Requirements](https://cloud.google.com/storage/docs/bucket-naming#requirements)).", "type": "string" }, "generation": { @@ -4631,7 +4631,7 @@ false "type": "string" }, "object": { -"description": "Cloud Storage object containing the source manifest. This object must be a JSON file.", +"description": "Required. Cloud Storage object containing the source manifest. This object must be a JSON file.", "type": "string" } }, diff --git a/googleapiclient/discovery_cache/documents/run.v2.json b/googleapiclient/discovery_cache/documents/run.v2.json index d0a5e1c5453..2f92aca11bf 100644 --- a/googleapiclient/discovery_cache/documents/run.v2.json +++ b/googleapiclient/discovery_cache/documents/run.v2.json @@ -523,7 +523,7 @@ ] }, "list": { -"description": "Lists Jobs.", +"description": "Lists Jobs. Results are sorted by creation time, descending.", "flatPath": "v2/projects/{projectsId}/locations/{locationsId}/jobs", "httpMethod": "GET", "id": "run.projects.locations.jobs.list", @@ -811,7 +811,7 @@ ] }, "list": { -"description": "Lists Executions from a Job.", +"description": "Lists Executions from a Job. Results are sorted by creation time, descending.", "flatPath": "v2/projects/{projectsId}/locations/{locationsId}/jobs/{jobsId}/executions", "httpMethod": "GET", "id": "run.projects.locations.jobs.executions.list", @@ -1182,7 +1182,7 @@ ] }, "list": { -"description": "Lists Services.", +"description": "Lists Services. 
Results are sorted by creation time, descending.", "flatPath": "v2/projects/{projectsId}/locations/{locationsId}/services", "httpMethod": "GET", "id": "run.projects.locations.services.list", @@ -1420,7 +1420,7 @@ ] }, "list": { -"description": "Lists Revisions from a given Service, or from a given location.", +"description": "Lists Revisions from a given Service, or from a given location. Results are sorted by creation time, descending.", "flatPath": "v2/projects/{projectsId}/locations/{locationsId}/services/{servicesId}/revisions", "httpMethod": "GET", "id": "run.projects.locations.services.revisions.list", @@ -1469,7 +1469,7 @@ } } }, -"revision": "20240510", +"revision": "20240531", "rootUrl": "https://run.googleapis.com/", "schemas": { "GoogleCloudRunV2BinaryAuthorization": { @@ -2366,6 +2366,10 @@ "readOnly": true, "type": "boolean" }, +"runExecutionToken": { +"description": "A unique string used as a suffix for creating a new execution. The Job will become ready when the execution is successfully completed. The sum of job name and token length must be fewer than 63 characters.", +"type": "string" +}, "satisfiesPzs": { "description": "Output only. Reserved for future use.", "readOnly": true, @@ -3913,7 +3917,7 @@ }, "source": { "$ref": "GoogleDevtoolsCloudbuildV1Source", -"description": "The location of the source files to build." +"description": "Optional. The location of the source files to build." }, "sourceProvenance": { "$ref": "GoogleDevtoolsCloudbuildV1SourceProvenance", @@ -4073,7 +4077,7 @@ "type": "string" }, "diskSizeGb": { -"description": "Requested disk size for the VM that runs the build. Note that this is *NOT* \"disk free\"; some of the space will be used by the operating system and build utilities. Also note that this is the minimum disk size that will be allocated for the build -- the build may run with a larger disk than requested. At present, the maximum disk size is 2000GB; builds that request more than the maximum are rejected with an error.", +"description": "Requested disk size for the VM that runs the build. Note that this is *NOT* \"disk free\"; some of the space will be used by the operating system and build utilities. Also note that this is the minimum disk size that will be allocated for the build -- the build may run with a larger disk than requested. At present, the maximum disk size is 4000GB; builds that request more than the maximum are rejected with an error.", "format": "int64", "type": "string" }, @@ -4380,7 +4384,7 @@ false "id": "GoogleDevtoolsCloudbuildV1ConnectedRepository", "properties": { "dir": { -"description": "Directory, relative to the source root, in which to run the build.", +"description": "Optional. Directory, relative to the source root, in which to run the build.", "type": "string" }, "repository": { @@ -4388,7 +4392,7 @@ false "type": "string" }, "revision": { -"description": "The revision to fetch from the Git repository such as a branch, a tag, a commit SHA, or any Git ref.", +"description": "Required. The revision to fetch from the Git repository such as a branch, a tag, a commit SHA, or any Git ref.", "type": "string" } }, @@ -4496,15 +4500,15 @@ false "id": "GoogleDevtoolsCloudbuildV1GitSource", "properties": { "dir": { -"description": "Directory, relative to the source root, in which to run the build. This must be a relative path. If a step's `dir` is specified and is an absolute path, this value is ignored for that step's execution.", +"description": "Optional. 
Directory, relative to the source root, in which to run the build. This must be a relative path. If a step's `dir` is specified and is an absolute path, this value is ignored for that step's execution.", "type": "string" }, "revision": { -"description": "The revision to fetch from the Git repository such as a branch, a tag, a commit SHA, or any Git ref. Cloud Build uses `git fetch` to fetch the revision from the Git repository; therefore make sure that the string you provide for `revision` is parsable by the command. For information on string values accepted by `git fetch`, see https://git-scm.com/docs/gitrevisions#_specifying_revisions. For information on `git fetch`, see https://git-scm.com/docs/git-fetch.", +"description": "Optional. The revision to fetch from the Git repository such as a branch, a tag, a commit SHA, or any Git ref. Cloud Build uses `git fetch` to fetch the revision from the Git repository; therefore make sure that the string you provide for `revision` is parsable by the command. For information on string values accepted by `git fetch`, see https://git-scm.com/docs/gitrevisions#_specifying_revisions. For information on `git fetch`, see https://git-scm.com/docs/git-fetch.", "type": "string" }, "url": { -"description": "Location of the Git repo to build. This will be used as a `git remote`, see https://git-scm.com/docs/git-remote.", +"description": "Required. Location of the Git repo to build. This will be used as a `git remote`, see https://git-scm.com/docs/git-remote.", "type": "string" } }, @@ -4656,26 +4660,26 @@ false "type": "string" }, "dir": { -"description": "Directory, relative to the source root, in which to run the build. This must be a relative path. If a step's `dir` is specified and is an absolute path, this value is ignored for that step's execution.", +"description": "Optional. Directory, relative to the source root, in which to run the build. This must be a relative path. If a step's `dir` is specified and is an absolute path, this value is ignored for that step's execution.", "type": "string" }, "invertRegex": { -"description": "Only trigger a build if the revision regex does NOT match the revision regex.", +"description": "Optional. Only trigger a build if the revision regex does NOT match the revision regex.", "type": "boolean" }, "projectId": { -"description": "ID of the project that owns the Cloud Source Repository. If omitted, the project ID requesting the build is assumed.", +"description": "Optional. ID of the project that owns the Cloud Source Repository. If omitted, the project ID requesting the build is assumed.", "type": "string" }, "repoName": { -"description": "Name of the Cloud Source Repository.", +"description": "Required. Name of the Cloud Source Repository.", "type": "string" }, "substitutions": { "additionalProperties": { "type": "string" }, -"description": "Substitutions to use in a triggered build. Should only be used with RunBuildTrigger", +"description": "Optional. Substitutions to use in a triggered build. Should only be used with RunBuildTrigger", "type": "object" }, "tagName": { @@ -4880,12 +4884,12 @@ false "type": "string" }, "generation": { -"description": "Cloud Storage generation for the object. If the generation is omitted, the latest generation will be used.", +"description": "Optional. Cloud Storage generation for the object. If the generation is omitted, the latest generation will be used.", "format": "int64", "type": "string" }, "object": { -"description": "Cloud Storage object containing the source. 
This object must be a zipped (`.zip`) or gzipped archive file (`.tar.gz`) containing source to build.", +"description": "Required. Cloud Storage object containing the source. This object must be a zipped (`.zip`) or gzipped archive file (`.tar.gz`) containing source to build.", "type": "string" }, "sourceFetcher": { @@ -4910,7 +4914,7 @@ false "id": "GoogleDevtoolsCloudbuildV1StorageSourceManifest", "properties": { "bucket": { -"description": "Cloud Storage bucket containing the source manifest (see [Bucket Name Requirements](https://cloud.google.com/storage/docs/bucket-naming#requirements)).", +"description": "Required. Cloud Storage bucket containing the source manifest (see [Bucket Name Requirements](https://cloud.google.com/storage/docs/bucket-naming#requirements)).", "type": "string" }, "generation": { @@ -4919,7 +4923,7 @@ false "type": "string" }, "object": { -"description": "Cloud Storage object containing the source manifest. This object must be a JSON file.", +"description": "Required. Cloud Storage object containing the source manifest. This object must be a JSON file.", "type": "string" } }, diff --git a/googleapiclient/discovery_cache/documents/script.v1.json b/googleapiclient/discovery_cache/documents/script.v1.json index 6a04a7b2f5e..bde26dcc916 100644 --- a/googleapiclient/discovery_cache/documents/script.v1.json +++ b/googleapiclient/discovery_cache/documents/script.v1.json @@ -891,7 +891,7 @@ } } }, -"revision": "20240519", +"revision": "20240527", "rootUrl": "https://script.googleapis.com/", "schemas": { "Content": { diff --git a/googleapiclient/discovery_cache/documents/searchconsole.v1.json b/googleapiclient/discovery_cache/documents/searchconsole.v1.json index 914a6be042c..5deb32699b9 100644 --- a/googleapiclient/discovery_cache/documents/searchconsole.v1.json +++ b/googleapiclient/discovery_cache/documents/searchconsole.v1.json @@ -400,7 +400,7 @@ } } }, -"revision": "20240526", +"revision": "20240602", "rootUrl": "https://searchconsole.googleapis.com/", "schemas": { "AmpInspectionResult": { diff --git a/googleapiclient/discovery_cache/documents/secretmanager.v1.json b/googleapiclient/discovery_cache/documents/secretmanager.v1.json index 7b99c8dbcb9..58a61b96d2b 100644 --- a/googleapiclient/discovery_cache/documents/secretmanager.v1.json +++ b/googleapiclient/discovery_cache/documents/secretmanager.v1.json @@ -35,6 +35,61 @@ "description": "Regional Endpoint", "endpointUrl": "https://secretmanager.us-east1.rep.googleapis.com/", "location": "us-east1" +}, +{ +"description": "Regional Endpoint", +"endpointUrl": "https://secretmanager.us-central2.rep.googleapis.com/", +"location": "us-central2" +}, +{ +"description": "Regional Endpoint", +"endpointUrl": "https://secretmanager.us-west1.rep.googleapis.com/", +"location": "us-west1" +}, +{ +"description": "Regional Endpoint", +"endpointUrl": "https://secretmanager.us-west2.rep.googleapis.com/", +"location": "us-west2" +}, +{ +"description": "Regional Endpoint", +"endpointUrl": "https://secretmanager.us-west3.rep.googleapis.com/", +"location": "us-west3" +}, +{ +"description": "Regional Endpoint", +"endpointUrl": "https://secretmanager.us-west4.rep.googleapis.com/", +"location": "us-west4" +}, +{ +"description": "Regional Endpoint", +"endpointUrl": "https://secretmanager.us-east4.rep.googleapis.com/", +"location": "us-east4" +}, +{ +"description": "Regional Endpoint", +"endpointUrl": "https://secretmanager.us-east5.rep.googleapis.com/", +"location": "us-east5" +}, +{ +"description": "Regional Endpoint", +"endpointUrl": 
"https://secretmanager.us-south1.rep.googleapis.com/", +"location": "us-south1" +}, +{ +"description": "Regional Endpoint", +"endpointUrl": "https://secretmanager.europe-west3.rep.googleapis.com/", +"location": "europe-west3" +}, +{ +"description": "Regional Endpoint", +"endpointUrl": "https://secretmanager.europe-west8.rep.googleapis.com/", +"location": "europe-west8" +}, +{ +"description": "Regional Endpoint", +"endpointUrl": "https://secretmanager.europe-west9.rep.googleapis.com/", +"location": "europe-west9" } ], "fullyEncodeReservedExpansion": true, @@ -1130,7 +1185,7 @@ } } }, -"revision": "20240523", +"revision": "20240527", "rootUrl": "https://secretmanager.googleapis.com/", "schemas": { "AccessSecretVersionResponse": { diff --git a/googleapiclient/discovery_cache/documents/secretmanager.v1beta1.json b/googleapiclient/discovery_cache/documents/secretmanager.v1beta1.json index bf13c712ef6..fc5f8ae2b2a 100644 --- a/googleapiclient/discovery_cache/documents/secretmanager.v1beta1.json +++ b/googleapiclient/discovery_cache/documents/secretmanager.v1beta1.json @@ -35,6 +35,61 @@ "description": "Regional Endpoint", "endpointUrl": "https://secretmanager.us-east1.rep.googleapis.com/", "location": "us-east1" +}, +{ +"description": "Regional Endpoint", +"endpointUrl": "https://secretmanager.us-central2.rep.googleapis.com/", +"location": "us-central2" +}, +{ +"description": "Regional Endpoint", +"endpointUrl": "https://secretmanager.us-west1.rep.googleapis.com/", +"location": "us-west1" +}, +{ +"description": "Regional Endpoint", +"endpointUrl": "https://secretmanager.us-west2.rep.googleapis.com/", +"location": "us-west2" +}, +{ +"description": "Regional Endpoint", +"endpointUrl": "https://secretmanager.us-west3.rep.googleapis.com/", +"location": "us-west3" +}, +{ +"description": "Regional Endpoint", +"endpointUrl": "https://secretmanager.us-west4.rep.googleapis.com/", +"location": "us-west4" +}, +{ +"description": "Regional Endpoint", +"endpointUrl": "https://secretmanager.us-east4.rep.googleapis.com/", +"location": "us-east4" +}, +{ +"description": "Regional Endpoint", +"endpointUrl": "https://secretmanager.us-east5.rep.googleapis.com/", +"location": "us-east5" +}, +{ +"description": "Regional Endpoint", +"endpointUrl": "https://secretmanager.us-south1.rep.googleapis.com/", +"location": "us-south1" +}, +{ +"description": "Regional Endpoint", +"endpointUrl": "https://secretmanager.europe-west3.rep.googleapis.com/", +"location": "europe-west3" +}, +{ +"description": "Regional Endpoint", +"endpointUrl": "https://secretmanager.europe-west8.rep.googleapis.com/", +"location": "europe-west8" +}, +{ +"description": "Regional Endpoint", +"endpointUrl": "https://secretmanager.europe-west9.rep.googleapis.com/", +"location": "europe-west9" } ], "fullyEncodeReservedExpansion": true, @@ -650,7 +705,7 @@ } } }, -"revision": "20240523", +"revision": "20240527", "rootUrl": "https://secretmanager.googleapis.com/", "schemas": { "AccessSecretVersionResponse": { diff --git a/googleapiclient/discovery_cache/documents/secretmanager.v1beta2.json b/googleapiclient/discovery_cache/documents/secretmanager.v1beta2.json index 507b312eb30..8ad0f0aa2a3 100644 --- a/googleapiclient/discovery_cache/documents/secretmanager.v1beta2.json +++ b/googleapiclient/discovery_cache/documents/secretmanager.v1beta2.json @@ -35,6 +35,61 @@ "description": "Regional Endpoint", "endpointUrl": "https://secretmanager.us-east1.rep.googleapis.com/", "location": "us-east1" +}, +{ +"description": "Regional Endpoint", +"endpointUrl": 
"https://secretmanager.us-central2.rep.googleapis.com/", +"location": "us-central2" +}, +{ +"description": "Regional Endpoint", +"endpointUrl": "https://secretmanager.us-west1.rep.googleapis.com/", +"location": "us-west1" +}, +{ +"description": "Regional Endpoint", +"endpointUrl": "https://secretmanager.us-west2.rep.googleapis.com/", +"location": "us-west2" +}, +{ +"description": "Regional Endpoint", +"endpointUrl": "https://secretmanager.us-west3.rep.googleapis.com/", +"location": "us-west3" +}, +{ +"description": "Regional Endpoint", +"endpointUrl": "https://secretmanager.us-west4.rep.googleapis.com/", +"location": "us-west4" +}, +{ +"description": "Regional Endpoint", +"endpointUrl": "https://secretmanager.us-east4.rep.googleapis.com/", +"location": "us-east4" +}, +{ +"description": "Regional Endpoint", +"endpointUrl": "https://secretmanager.us-east5.rep.googleapis.com/", +"location": "us-east5" +}, +{ +"description": "Regional Endpoint", +"endpointUrl": "https://secretmanager.us-south1.rep.googleapis.com/", +"location": "us-south1" +}, +{ +"description": "Regional Endpoint", +"endpointUrl": "https://secretmanager.europe-west3.rep.googleapis.com/", +"location": "europe-west3" +}, +{ +"description": "Regional Endpoint", +"endpointUrl": "https://secretmanager.europe-west8.rep.googleapis.com/", +"location": "europe-west8" +}, +{ +"description": "Regional Endpoint", +"endpointUrl": "https://secretmanager.europe-west9.rep.googleapis.com/", +"location": "europe-west9" } ], "fullyEncodeReservedExpansion": true, @@ -1130,7 +1185,7 @@ } } }, -"revision": "20240523", +"revision": "20240527", "rootUrl": "https://secretmanager.googleapis.com/", "schemas": { "AccessSecretVersionResponse": { diff --git a/googleapiclient/discovery_cache/documents/securitycenter.v1.json b/googleapiclient/discovery_cache/documents/securitycenter.v1.json index a08e9997271..d3f37913fee 100644 --- a/googleapiclient/discovery_cache/documents/securitycenter.v1.json +++ b/googleapiclient/discovery_cache/documents/securitycenter.v1.json @@ -6027,7 +6027,7 @@ } } }, -"revision": "20240520", +"revision": "20240531", "rootUrl": "https://securitycenter.googleapis.com/", "schemas": { "Access": { diff --git a/googleapiclient/discovery_cache/documents/securitycenter.v1beta1.json b/googleapiclient/discovery_cache/documents/securitycenter.v1beta1.json index d5e5be9d202..73e33f033ae 100644 --- a/googleapiclient/discovery_cache/documents/securitycenter.v1beta1.json +++ b/googleapiclient/discovery_cache/documents/securitycenter.v1beta1.json @@ -896,7 +896,7 @@ } } }, -"revision": "20240520", +"revision": "20240531", "rootUrl": "https://securitycenter.googleapis.com/", "schemas": { "Access": { diff --git a/googleapiclient/discovery_cache/documents/securitycenter.v1beta2.json b/googleapiclient/discovery_cache/documents/securitycenter.v1beta2.json index 8362587e1cc..fe3accef837 100644 --- a/googleapiclient/discovery_cache/documents/securitycenter.v1beta2.json +++ b/googleapiclient/discovery_cache/documents/securitycenter.v1beta2.json @@ -1906,7 +1906,7 @@ } } }, -"revision": "20240520", +"revision": "20240531", "rootUrl": "https://securitycenter.googleapis.com/", "schemas": { "Access": { diff --git a/googleapiclient/discovery_cache/documents/servicecontrol.v1.json b/googleapiclient/discovery_cache/documents/servicecontrol.v1.json index f7dda91b7da..a413d09d0e1 100644 --- a/googleapiclient/discovery_cache/documents/servicecontrol.v1.json +++ b/googleapiclient/discovery_cache/documents/servicecontrol.v1.json @@ -197,7 +197,7 @@ } } }, 
-"revision": "20240510", +"revision": "20240524", "rootUrl": "https://servicecontrol.googleapis.com/", "schemas": { "AllocateInfo": { @@ -1289,6 +1289,7 @@ "description": "Properties of the object.", "type": "any" }, +"deprecated": true, "description": "Optional. Resource payload that is currently in scope and is subjected to orgpolicy conditions. This payload may be the subset of the actual Resource that may come in the request. This payload should not contain any core content.", "type": "object" }, @@ -1296,6 +1297,7 @@ "additionalProperties": { "type": "string" }, +"deprecated": true, "description": "Optional. Tags referenced on the resource at the time of evaluation. These also include the federated tags, if they are supplied in the CheckOrgPolicy or CheckCustomConstraints Requests. Optional field as of now. These tags are the Cloud tags that are available on the resource during the policy evaluation and will be available as part of the OrgPolicy check response for logging purposes.", "type": "object" }, diff --git a/googleapiclient/discovery_cache/documents/servicecontrol.v2.json b/googleapiclient/discovery_cache/documents/servicecontrol.v2.json index 887fbd62646..5a87c1a3615 100644 --- a/googleapiclient/discovery_cache/documents/servicecontrol.v2.json +++ b/googleapiclient/discovery_cache/documents/servicecontrol.v2.json @@ -169,7 +169,7 @@ } } }, -"revision": "20240510", +"revision": "20240524", "rootUrl": "https://servicecontrol.googleapis.com/", "schemas": { "Api": { @@ -529,6 +529,7 @@ "description": "Properties of the object.", "type": "any" }, +"deprecated": true, "description": "Optional. Resource payload that is currently in scope and is subjected to orgpolicy conditions. This payload may be the subset of the actual Resource that may come in the request. This payload should not contain any core content.", "type": "object" }, @@ -536,6 +537,7 @@ "additionalProperties": { "type": "string" }, +"deprecated": true, "description": "Optional. Tags referenced on the resource at the time of evaluation. These also include the federated tags, if they are supplied in the CheckOrgPolicy or CheckCustomConstraints Requests. Optional field as of now. 
These tags are the Cloud tags that are available on the resource during the policy evaluation and will be available as part of the OrgPolicy check response for logging purposes.", "type": "object" }, diff --git a/googleapiclient/discovery_cache/documents/servicedirectory.v1.json b/googleapiclient/discovery_cache/documents/servicedirectory.v1.json index 58d38ea8ce2..6069468a989 100644 --- a/googleapiclient/discovery_cache/documents/servicedirectory.v1.json +++ b/googleapiclient/discovery_cache/documents/servicedirectory.v1.json @@ -883,7 +883,7 @@ } } }, -"revision": "20240516", +"revision": "20240526", "rootUrl": "https://servicedirectory.googleapis.com/", "schemas": { "Binding": { diff --git a/googleapiclient/discovery_cache/documents/servicedirectory.v1beta1.json b/googleapiclient/discovery_cache/documents/servicedirectory.v1beta1.json index 560e77c2c85..bde46b92788 100644 --- a/googleapiclient/discovery_cache/documents/servicedirectory.v1beta1.json +++ b/googleapiclient/discovery_cache/documents/servicedirectory.v1beta1.json @@ -971,7 +971,7 @@ } } }, -"revision": "20240516", +"revision": "20240526", "rootUrl": "https://servicedirectory.googleapis.com/", "schemas": { "Binding": { diff --git a/googleapiclient/discovery_cache/documents/servicemanagement.v1.json b/googleapiclient/discovery_cache/documents/servicemanagement.v1.json index 1629847ed9d..2cf20dbddea 100644 --- a/googleapiclient/discovery_cache/documents/servicemanagement.v1.json +++ b/googleapiclient/discovery_cache/documents/servicemanagement.v1.json @@ -830,7 +830,7 @@ } } }, -"revision": "20240517", +"revision": "20240524", "rootUrl": "https://servicemanagement.googleapis.com/", "schemas": { "Advice": { diff --git a/googleapiclient/discovery_cache/documents/servicenetworking.v1.json b/googleapiclient/discovery_cache/documents/servicenetworking.v1.json index c1f5db06b3d..f58083171ed 100644 --- a/googleapiclient/discovery_cache/documents/servicenetworking.v1.json +++ b/googleapiclient/discovery_cache/documents/servicenetworking.v1.json @@ -1029,7 +1029,7 @@ } } }, -"revision": "20240521", +"revision": "20240602", "rootUrl": "https://servicenetworking.googleapis.com/", "schemas": { "AddDnsRecordSetMetadata": { @@ -2421,7 +2421,7 @@ "type": "object" }, "HttpRule": { -"description": "# gRPC Transcoding gRPC Transcoding is a feature for mapping between a gRPC method and one or more HTTP REST endpoints. It allows developers to build a single API service that supports both gRPC APIs and REST APIs. Many systems, including [Google APIs](https://github.com/googleapis/googleapis), [Cloud Endpoints](https://cloud.google.com/endpoints), [gRPC Gateway](https://github.com/grpc-ecosystem/grpc-gateway), and [Envoy](https://github.com/envoyproxy/envoy) proxy support this feature and use it for large scale production services. `HttpRule` defines the schema of the gRPC/REST mapping. The mapping specifies how different portions of the gRPC request message are mapped to the URL path, URL query parameters, and HTTP request body. It also controls how the gRPC response message is mapped to the HTTP response body. `HttpRule` is typically specified as an `google.api.http` annotation on the gRPC method. Each mapping specifies a URL path template and an HTTP method. The path template may refer to one or more fields in the gRPC request message, as long as each field is a non-repeated field with a primitive (non-message) type. The path template controls how fields of the request message are mapped to the URL path. 
Example: service Messaging { rpc GetMessage(GetMessageRequest) returns (Message) { option (google.api.http) = { get: \"/v1/{name=messages/*}\" }; } } message GetMessageRequest { string name = 1; // Mapped to URL path. } message Message { string text = 1; // The resource content. } This enables an HTTP REST to gRPC mapping as below: HTTP | gRPC -----|----- `GET /v1/messages/123456` | `GetMessage(name: \"messages/123456\")` Any fields in the request message which are not bound by the path template automatically become HTTP query parameters if there is no HTTP request body. For example: service Messaging { rpc GetMessage(GetMessageRequest) returns (Message) { option (google.api.http) = { get:\"/v1/messages/{message_id}\" }; } } message GetMessageRequest { message SubMessage { string subfield = 1; } string message_id = 1; // Mapped to URL path. int64 revision = 2; // Mapped to URL query parameter `revision`. SubMessage sub = 3; // Mapped to URL query parameter `sub.subfield`. } This enables a HTTP JSON to RPC mapping as below: HTTP | gRPC -----|----- `GET /v1/messages/123456?revision=2&sub.subfield=foo` | `GetMessage(message_id: \"123456\" revision: 2 sub: SubMessage(subfield: \"foo\"))` Note that fields which are mapped to URL query parameters must have a primitive type or a repeated primitive type or a non-repeated message type. In the case of a repeated type, the parameter can be repeated in the URL as `...?param=A&param=B`. In the case of a message type, each field of the message is mapped to a separate parameter, such as `...?foo.a=A&foo.b=B&foo.c=C`. For HTTP methods that allow a request body, the `body` field specifies the mapping. Consider a REST update method on the message resource collection: service Messaging { rpc UpdateMessage(UpdateMessageRequest) returns (Message) { option (google.api.http) = { patch: \"/v1/messages/{message_id}\" body: \"message\" }; } } message UpdateMessageRequest { string message_id = 1; // mapped to the URL Message message = 2; // mapped to the body } The following HTTP JSON to RPC mapping is enabled, where the representation of the JSON in the request body is determined by protos JSON encoding: HTTP | gRPC -----|----- `PATCH /v1/messages/123456 { \"text\": \"Hi!\" }` | `UpdateMessage(message_id: \"123456\" message { text: \"Hi!\" })` The special name `*` can be used in the body mapping to define that every field not bound by the path template should be mapped to the request body. This enables the following alternative definition of the update method: service Messaging { rpc UpdateMessage(Message) returns (Message) { option (google.api.http) = { patch: \"/v1/messages/{message_id}\" body: \"*\" }; } } message Message { string message_id = 1; string text = 2; } The following HTTP JSON to RPC mapping is enabled: HTTP | gRPC -----|----- `PATCH /v1/messages/123456 { \"text\": \"Hi!\" }` | `UpdateMessage(message_id: \"123456\" text: \"Hi!\")` Note that when using `*` in the body mapping, it is not possible to have HTTP parameters, as all fields not bound by the path end in the body. This makes this option more rarely used in practice when defining REST APIs. The common usage of `*` is in custom methods which don't use the URL at all for transferring data. It is possible to define multiple HTTP methods for one RPC by using the `additional_bindings` option.
Example: service Messaging { rpc GetMessage(GetMessageRequest) returns (Message) { option (google.api.http) = { get: \"/v1/messages/{message_id}\" additional_bindings { get: \"/v1/users/{user_id}/messages/{message_id}\" } }; } } message GetMessageRequest { string message_id = 1; string user_id = 2; } This enables the following two alternative HTTP JSON to RPC mappings: HTTP | gRPC -----|----- `GET /v1/messages/123456` | `GetMessage(message_id: \"123456\")` `GET /v1/users/me/messages/123456` | `GetMessage(user_id: \"me\" message_id: \"123456\")` ## Rules for HTTP mapping 1. Leaf request fields (recursive expansion nested messages in the request message) are classified into three categories: - Fields referred by the path template. They are passed via the URL path. - Fields referred by the HttpRule.body. They are passed via the HTTP request body. - All other fields are passed via the URL query parameters, and the parameter name is the field path in the request message. A repeated field can be represented as multiple query parameters under the same name. 2. If HttpRule.body is \"*\", there is no URL query parameter, all fields are passed via URL path and HTTP request body. 3. If HttpRule.body is omitted, there is no HTTP request body, all fields are passed via URL path and URL query parameters. ### Path template syntax Template = \"/\" Segments [ Verb ] ; Segments = Segment { \"/\" Segment } ; Segment = \"*\" | \"**\" | LITERAL | Variable ; Variable = \"{\" FieldPath [ \"=\" Segments ] \"}\" ; FieldPath = IDENT { \".\" IDENT } ; Verb = \":\" LITERAL ; The syntax `*` matches a single URL path segment. The syntax `**` matches zero or more URL path segments, which must be the last part of the URL path except the `Verb`. The syntax `Variable` matches part of the URL path as specified by its template. A variable template must not contain other variables. If a variable matches a single path segment, its template may be omitted, e.g. `{var}` is equivalent to `{var=*}`. The syntax `LITERAL` matches literal text in the URL path. If the `LITERAL` contains any reserved character, such characters should be percent-encoded before the matching. If a variable contains exactly one path segment, such as `\"{var}\"` or `\"{var=*}\"`, when such a variable is expanded into a URL path on the client side, all characters except `[-_.~0-9a-zA-Z]` are percent-encoded. The server side does the reverse decoding. Such variables show up in the [Discovery Document](https://developers.google.com/discovery/v1/reference/apis) as `{var}`. If a variable contains multiple path segments, such as `\"{var=foo/*}\"` or `\"{var=**}\"`, when such a variable is expanded into a URL path on the client side, all characters except `[-_.~/0-9a-zA-Z]` are percent-encoded. The server side does the reverse decoding, except \"%2F\" and \"%2f\" are left unchanged. Such variables show up in the [Discovery Document](https://developers.google.com/discovery/v1/reference/apis) as `{+var}`. ## Using gRPC API Service Configuration gRPC API Service Configuration (service config) is a configuration language for configuring a gRPC service to become a user-facing product. The service config is simply the YAML representation of the `google.api.Service` proto message. As an alternative to annotating your proto file, you can configure gRPC transcoding in your service config YAML files. You do this by specifying a `HttpRule` that maps the gRPC method to a REST endpoint, achieving the same effect as the proto annotation. 
This can be particularly useful if you have a proto that is reused in multiple services. Note that any transcoding specified in the service config will override any matching transcoding configuration in the proto. Example: http: rules: # Selects a gRPC method and applies HttpRule to it. - selector: example.v1.Messaging.GetMessage get: /v1/messages/{message_id}/{sub.subfield} ## Special notes When gRPC Transcoding is used to map a gRPC to JSON REST endpoints, the proto to JSON conversion must follow the [proto3 specification](https://developers.google.com/protocol-buffers/docs/proto3#json). While the single segment variable follows the semantics of [RFC 6570](https://tools.ietf.org/html/rfc6570) Section 3.2.2 Simple String Expansion, the multi segment variable **does not** follow RFC 6570 Section 3.2.3 Reserved Expansion. The reason is that the Reserved Expansion does not expand special characters like `?` and `#`, which would lead to invalid URLs. As the result, gRPC Transcoding uses a custom encoding for multi segment variables. The path variables **must not** refer to any repeated or mapped field, because client libraries are not capable of handling such variable expansion. The path variables **must not** capture the leading \"/\" character. The reason is that the most common use case \"{var}\" does not capture the leading \"/\" character. For consistency, all path variables must share the same behavior. Repeated message fields must not be mapped to URL query parameters, because no client library can support such complicated mapping. If an API needs to use a JSON array for request or response body, it can map the request or response body to a repeated field. However, some gRPC Transcoding implementations may not support this feature.", +"description": "gRPC Transcoding gRPC Transcoding is a feature for mapping between a gRPC method and one or more HTTP REST endpoints. It allows developers to build a single API service that supports both gRPC APIs and REST APIs. Many systems, including [Google APIs](https://github.com/googleapis/googleapis), [Cloud Endpoints](https://cloud.google.com/endpoints), [gRPC Gateway](https://github.com/grpc-ecosystem/grpc-gateway), and [Envoy](https://github.com/envoyproxy/envoy) proxy support this feature and use it for large scale production services. `HttpRule` defines the schema of the gRPC/REST mapping. The mapping specifies how different portions of the gRPC request message are mapped to the URL path, URL query parameters, and HTTP request body. It also controls how the gRPC response message is mapped to the HTTP response body. `HttpRule` is typically specified as an `google.api.http` annotation on the gRPC method. Each mapping specifies a URL path template and an HTTP method. The path template may refer to one or more fields in the gRPC request message, as long as each field is a non-repeated field with a primitive (non-message) type. The path template controls how fields of the request message are mapped to the URL path. Example: service Messaging { rpc GetMessage(GetMessageRequest) returns (Message) { option (google.api.http) = { get: \"/v1/{name=messages/*}\" }; } } message GetMessageRequest { string name = 1; // Mapped to URL path. } message Message { string text = 1; // The resource content. 
} This enables an HTTP REST to gRPC mapping as below: - HTTP: `GET /v1/messages/123456` - gRPC: `GetMessage(name: \"messages/123456\")` Any fields in the request message which are not bound by the path template automatically become HTTP query parameters if there is no HTTP request body. For example: service Messaging { rpc GetMessage(GetMessageRequest) returns (Message) { option (google.api.http) = { get:\"/v1/messages/{message_id}\" }; } } message GetMessageRequest { message SubMessage { string subfield = 1; } string message_id = 1; // Mapped to URL path. int64 revision = 2; // Mapped to URL query parameter `revision`. SubMessage sub = 3; // Mapped to URL query parameter `sub.subfield`. } This enables an HTTP JSON to RPC mapping as below: - HTTP: `GET /v1/messages/123456?revision=2&sub.subfield=foo` - gRPC: `GetMessage(message_id: \"123456\" revision: 2 sub: SubMessage(subfield: \"foo\"))` Note that fields which are mapped to URL query parameters must have a primitive type or a repeated primitive type or a non-repeated message type. In the case of a repeated type, the parameter can be repeated in the URL as `...?param=A&param=B`. In the case of a message type, each field of the message is mapped to a separate parameter, such as `...?foo.a=A&foo.b=B&foo.c=C`. For HTTP methods that allow a request body, the `body` field specifies the mapping. Consider a REST update method on the message resource collection: service Messaging { rpc UpdateMessage(UpdateMessageRequest) returns (Message) { option (google.api.http) = { patch: \"/v1/messages/{message_id}\" body: \"message\" }; } } message UpdateMessageRequest { string message_id = 1; // mapped to the URL Message message = 2; // mapped to the body } The following HTTP JSON to RPC mapping is enabled, where the representation of the JSON in the request body is determined by the proto3 JSON encoding: - HTTP: `PATCH /v1/messages/123456 { \"text\": \"Hi!\" }` - gRPC: `UpdateMessage(message_id: \"123456\" message { text: \"Hi!\" })` The special name `*` can be used in the body mapping to define that every field not bound by the path template should be mapped to the request body. This enables the following alternative definition of the update method: service Messaging { rpc UpdateMessage(Message) returns (Message) { option (google.api.http) = { patch: \"/v1/messages/{message_id}\" body: \"*\" }; } } message Message { string message_id = 1; string text = 2; } The following HTTP JSON to RPC mapping is enabled: - HTTP: `PATCH /v1/messages/123456 { \"text\": \"Hi!\" }` - gRPC: `UpdateMessage(message_id: \"123456\" text: \"Hi!\")` Note that when using `*` in the body mapping, it is not possible to have HTTP parameters, as all fields not bound by the path end in the body. This makes this option rarely used in practice when defining REST APIs. The common usage of `*` is in custom methods which don't use the URL at all for transferring data. It is possible to define multiple HTTP methods for one RPC by using the `additional_bindings` option.
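A worked illustration of the query-parameter mapping just described: a minimal sketch, using only the Python standard library, of how a client could assemble the transcoded `GetMessage` URL. The host `example.googleapis.com` and the helper name are hypothetical; the path and parameter names come from the HttpRule example above.

from urllib.parse import urlencode

def build_get_message_url(message_id, revision=None, sub_subfield=None):
    # message_id is bound by the path template "/v1/messages/{message_id}".
    url = f"https://example.googleapis.com/v1/messages/{message_id}"
    # Fields not bound by the path template become query parameters,
    # named by their field path in the request message.
    params = {}
    if revision is not None:
        params["revision"] = revision
    if sub_subfield is not None:
        params["sub.subfield"] = sub_subfield
    return url + ("?" + urlencode(params) if params else "")

# build_get_message_url("123456", revision=2, sub_subfield="foo")
# -> "https://example.googleapis.com/v1/messages/123456?revision=2&sub.subfield=foo"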
Example: service Messaging { rpc GetMessage(GetMessageRequest) returns (Message) { option (google.api.http) = { get: \"/v1/messages/{message_id}\" additional_bindings { get: \"/v1/users/{user_id}/messages/{message_id}\" } }; } } message GetMessageRequest { string message_id = 1; string user_id = 2; } This enables the following two alternative HTTP JSON to RPC mappings: - HTTP: `GET /v1/messages/123456` - gRPC: `GetMessage(message_id: \"123456\")` - HTTP: `GET /v1/users/me/messages/123456` - gRPC: `GetMessage(user_id: \"me\" message_id: \"123456\")` Rules for HTTP mapping 1. Leaf request fields (recursively expanded nested messages in the request message) are classified into three categories: - Fields referred to by the path template. They are passed via the URL path. - Fields referred to by the HttpRule.body. They are passed via the HTTP request body. - All other fields are passed via the URL query parameters, and the parameter name is the field path in the request message. A repeated field can be represented as multiple query parameters under the same name. 2. If HttpRule.body is \"*\", there is no URL query parameter; all fields are passed via URL path and HTTP request body. 3. If HttpRule.body is omitted, there is no HTTP request body; all fields are passed via URL path and URL query parameters. Path template syntax Template = \"/\" Segments [ Verb ] ; Segments = Segment { \"/\" Segment } ; Segment = \"*\" | \"**\" | LITERAL | Variable ; Variable = \"{\" FieldPath [ \"=\" Segments ] \"}\" ; FieldPath = IDENT { \".\" IDENT } ; Verb = \":\" LITERAL ; The syntax `*` matches a single URL path segment. The syntax `**` matches zero or more URL path segments, which must be the last part of the URL path except the `Verb`. The syntax `Variable` matches part of the URL path as specified by its template. A variable template must not contain other variables. If a variable matches a single path segment, its template may be omitted, e.g. `{var}` is equivalent to `{var=*}`. The syntax `LITERAL` matches literal text in the URL path. If the `LITERAL` contains any reserved character, such characters should be percent-encoded before the matching. If a variable contains exactly one path segment, such as `\"{var}\"` or `\"{var=*}\"`, when such a variable is expanded into a URL path on the client side, all characters except `[-_.~0-9a-zA-Z]` are percent-encoded. The server side does the reverse decoding. Such variables show up in the [Discovery Document](https://developers.google.com/discovery/v1/reference/apis) as `{var}`. If a variable contains multiple path segments, such as `\"{var=foo/*}\"` or `\"{var=**}\"`, when such a variable is expanded into a URL path on the client side, all characters except `[-_.~/0-9a-zA-Z]` are percent-encoded. The server side does the reverse decoding, except \"%2F\" and \"%2f\" are left unchanged. Such variables show up in the [Discovery Document](https://developers.google.com/discovery/v1/reference/apis) as `{+var}`. Using gRPC API Service Configuration gRPC API Service Configuration (service config) is a configuration language for configuring a gRPC service to become a user-facing product. The service config is simply the YAML representation of the `google.api.Service` proto message. As an alternative to annotating your proto file, you can configure gRPC transcoding in your service config YAML files. You do this by specifying a `HttpRule` that maps the gRPC method to a REST endpoint, achieving the same effect as the proto annotation.
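The three rules under "Rules for HTTP mapping" above can be expressed compactly; the following is an illustrative sketch only (the function and its inputs are hypothetical, not part of any shipped library):

def classify_fields(leaf_field_paths, path_template_fields, body=None):
    """Classify each leaf request field as a path, body, or query parameter.

    body is None when HttpRule.body is omitted, "*", or the name of a
    top-level request field.
    """
    mapping = {}
    for field in leaf_field_paths:
        if field in path_template_fields:
            mapping[field] = "path"    # rule 1: referred to by the path template
        elif body == "*":
            mapping[field] = "body"    # rule 2: no query parameters at all
        elif body and (field == body or field.startswith(body + ".")):
            mapping[field] = "body"    # rule 1: referred to by HttpRule.body
        else:
            mapping[field] = "query"   # rules 1 and 3: name is the field path
    return mapping

# classify_fields(["message_id", "message.text"], {"message_id"}, body="message")
# -> {"message_id": "path", "message.text": "body"}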
This can be particularly useful if you have a proto that is reused in multiple services. Note that any transcoding specified in the service config will override any matching transcoding configuration in the proto. Example below selects a gRPC method and applies HttpRule to it. http: rules: - selector: example.v1.Messaging.GetMessage get: /v1/messages/{message_id}/{sub.subfield} Special notes When gRPC Transcoding is used to map a gRPC to JSON REST endpoints, the proto to JSON conversion must follow the [proto3 specification](https://developers.google.com/protocol-buffers/docs/proto3#json). While the single segment variable follows the semantics of [RFC 6570](https://tools.ietf.org/html/rfc6570) Section 3.2.2 Simple String Expansion, the multi segment variable **does not** follow RFC 6570 Section 3.2.3 Reserved Expansion. The reason is that the Reserved Expansion does not expand special characters like `?` and `#`, which would lead to invalid URLs. As the result, gRPC Transcoding uses a custom encoding for multi segment variables. The path variables **must not** refer to any repeated or mapped field, because client libraries are not capable of handling such variable expansion. The path variables **must not** capture the leading \"/\" character. The reason is that the most common use case \"{var}\" does not capture the leading \"/\" character. For consistency, all path variables must share the same behavior. Repeated message fields must not be mapped to URL query parameters, because no client library can support such complicated mapping. If an API needs to use a JSON array for request or response body, it can map the request or response body to a repeated field. However, some gRPC Transcoding implementations may not support this feature.", "id": "HttpRule", "properties": { "additionalBindings": { diff --git a/googleapiclient/discovery_cache/documents/servicenetworking.v1beta.json b/googleapiclient/discovery_cache/documents/servicenetworking.v1beta.json index 0c2e4a24285..6ce3d85ce56 100644 --- a/googleapiclient/discovery_cache/documents/servicenetworking.v1beta.json +++ b/googleapiclient/discovery_cache/documents/servicenetworking.v1beta.json @@ -307,7 +307,7 @@ } } }, -"revision": "20240521", +"revision": "20240602", "rootUrl": "https://servicenetworking.googleapis.com/", "schemas": { "AddDnsRecordSetMetadata": { @@ -1505,7 +1505,7 @@ "type": "object" }, "HttpRule": { -"description": "# gRPC Transcoding gRPC Transcoding is a feature for mapping between a gRPC method and one or more HTTP REST endpoints. It allows developers to build a single API service that supports both gRPC APIs and REST APIs. Many systems, including [Google APIs](https://github.com/googleapis/googleapis), [Cloud Endpoints](https://cloud.google.com/endpoints), [gRPC Gateway](https://github.com/grpc-ecosystem/grpc-gateway), and [Envoy](https://github.com/envoyproxy/envoy) proxy support this feature and use it for large scale production services. `HttpRule` defines the schema of the gRPC/REST mapping. The mapping specifies how different portions of the gRPC request message are mapped to the URL path, URL query parameters, and HTTP request body. It also controls how the gRPC response message is mapped to the HTTP response body. `HttpRule` is typically specified as an `google.api.http` annotation on the gRPC method. Each mapping specifies a URL path template and an HTTP method. 
The path template may refer to one or more fields in the gRPC request message, as long as each field is a non-repeated field with a primitive (non-message) type. The path template controls how fields of the request message are mapped to the URL path. Example: service Messaging { rpc GetMessage(GetMessageRequest) returns (Message) { option (google.api.http) = { get: \"/v1/{name=messages/*}\" }; } } message GetMessageRequest { string name = 1; // Mapped to URL path. } message Message { string text = 1; // The resource content. } This enables an HTTP REST to gRPC mapping as below: HTTP | gRPC -----|----- `GET /v1/messages/123456` | `GetMessage(name: \"messages/123456\")` Any fields in the request message which are not bound by the path template automatically become HTTP query parameters if there is no HTTP request body. For example: service Messaging { rpc GetMessage(GetMessageRequest) returns (Message) { option (google.api.http) = { get:\"/v1/messages/{message_id}\" }; } } message GetMessageRequest { message SubMessage { string subfield = 1; } string message_id = 1; // Mapped to URL path. int64 revision = 2; // Mapped to URL query parameter `revision`. SubMessage sub = 3; // Mapped to URL query parameter `sub.subfield`. } This enables a HTTP JSON to RPC mapping as below: HTTP | gRPC -----|----- `GET /v1/messages/123456?revision=2&sub.subfield=foo` | `GetMessage(message_id: \"123456\" revision: 2 sub: SubMessage(subfield: \"foo\"))` Note that fields which are mapped to URL query parameters must have a primitive type or a repeated primitive type or a non-repeated message type. In the case of a repeated type, the parameter can be repeated in the URL as `...?param=A¶m=B`. In the case of a message type, each field of the message is mapped to a separate parameter, such as `...?foo.a=A&foo.b=B&foo.c=C`. For HTTP methods that allow a request body, the `body` field specifies the mapping. Consider a REST update method on the message resource collection: service Messaging { rpc UpdateMessage(UpdateMessageRequest) returns (Message) { option (google.api.http) = { patch: \"/v1/messages/{message_id}\" body: \"message\" }; } } message UpdateMessageRequest { string message_id = 1; // mapped to the URL Message message = 2; // mapped to the body } The following HTTP JSON to RPC mapping is enabled, where the representation of the JSON in the request body is determined by protos JSON encoding: HTTP | gRPC -----|----- `PATCH /v1/messages/123456 { \"text\": \"Hi!\" }` | `UpdateMessage(message_id: \"123456\" message { text: \"Hi!\" })` The special name `*` can be used in the body mapping to define that every field not bound by the path template should be mapped to the request body. This enables the following alternative definition of the update method: service Messaging { rpc UpdateMessage(Message) returns (Message) { option (google.api.http) = { patch: \"/v1/messages/{message_id}\" body: \"*\" }; } } message Message { string message_id = 1; string text = 2; } The following HTTP JSON to RPC mapping is enabled: HTTP | gRPC -----|----- `PATCH /v1/messages/123456 { \"text\": \"Hi!\" }` | `UpdateMessage(message_id: \"123456\" text: \"Hi!\")` Note that when using `*` in the body mapping, it is not possible to have HTTP parameters, as all fields not bound by the path end in the body. This makes this option more rarely used in practice when defining REST APIs. The common usage of `*` is in custom methods which don't use the URL at all for transferring data. 
It is possible to define multiple HTTP methods for one RPC by using the `additional_bindings` option. Example: service Messaging { rpc GetMessage(GetMessageRequest) returns (Message) { option (google.api.http) = { get: \"/v1/messages/{message_id}\" additional_bindings { get: \"/v1/users/{user_id}/messages/{message_id}\" } }; } } message GetMessageRequest { string message_id = 1; string user_id = 2; } This enables the following two alternative HTTP JSON to RPC mappings: HTTP | gRPC -----|----- `GET /v1/messages/123456` | `GetMessage(message_id: \"123456\")` `GET /v1/users/me/messages/123456` | `GetMessage(user_id: \"me\" message_id: \"123456\")` ## Rules for HTTP mapping 1. Leaf request fields (recursive expansion nested messages in the request message) are classified into three categories: - Fields referred by the path template. They are passed via the URL path. - Fields referred by the HttpRule.body. They are passed via the HTTP request body. - All other fields are passed via the URL query parameters, and the parameter name is the field path in the request message. A repeated field can be represented as multiple query parameters under the same name. 2. If HttpRule.body is \"*\", there is no URL query parameter, all fields are passed via URL path and HTTP request body. 3. If HttpRule.body is omitted, there is no HTTP request body, all fields are passed via URL path and URL query parameters. ### Path template syntax Template = \"/\" Segments [ Verb ] ; Segments = Segment { \"/\" Segment } ; Segment = \"*\" | \"**\" | LITERAL | Variable ; Variable = \"{\" FieldPath [ \"=\" Segments ] \"}\" ; FieldPath = IDENT { \".\" IDENT } ; Verb = \":\" LITERAL ; The syntax `*` matches a single URL path segment. The syntax `**` matches zero or more URL path segments, which must be the last part of the URL path except the `Verb`. The syntax `Variable` matches part of the URL path as specified by its template. A variable template must not contain other variables. If a variable matches a single path segment, its template may be omitted, e.g. `{var}` is equivalent to `{var=*}`. The syntax `LITERAL` matches literal text in the URL path. If the `LITERAL` contains any reserved character, such characters should be percent-encoded before the matching. If a variable contains exactly one path segment, such as `\"{var}\"` or `\"{var=*}\"`, when such a variable is expanded into a URL path on the client side, all characters except `[-_.~0-9a-zA-Z]` are percent-encoded. The server side does the reverse decoding. Such variables show up in the [Discovery Document](https://developers.google.com/discovery/v1/reference/apis) as `{var}`. If a variable contains multiple path segments, such as `\"{var=foo/*}\"` or `\"{var=**}\"`, when such a variable is expanded into a URL path on the client side, all characters except `[-_.~/0-9a-zA-Z]` are percent-encoded. The server side does the reverse decoding, except \"%2F\" and \"%2f\" are left unchanged. Such variables show up in the [Discovery Document](https://developers.google.com/discovery/v1/reference/apis) as `{+var}`. ## Using gRPC API Service Configuration gRPC API Service Configuration (service config) is a configuration language for configuring a gRPC service to become a user-facing product. The service config is simply the YAML representation of the `google.api.Service` proto message. As an alternative to annotating your proto file, you can configure gRPC transcoding in your service config YAML files. 
You do this by specifying a `HttpRule` that maps the gRPC method to a REST endpoint, achieving the same effect as the proto annotation. This can be particularly useful if you have a proto that is reused in multiple services. Note that any transcoding specified in the service config will override any matching transcoding configuration in the proto. Example: http: rules: # Selects a gRPC method and applies HttpRule to it. - selector: example.v1.Messaging.GetMessage get: /v1/messages/{message_id}/{sub.subfield} ## Special notes When gRPC Transcoding is used to map a gRPC to JSON REST endpoints, the proto to JSON conversion must follow the [proto3 specification](https://developers.google.com/protocol-buffers/docs/proto3#json). While the single segment variable follows the semantics of [RFC 6570](https://tools.ietf.org/html/rfc6570) Section 3.2.2 Simple String Expansion, the multi segment variable **does not** follow RFC 6570 Section 3.2.3 Reserved Expansion. The reason is that the Reserved Expansion does not expand special characters like `?` and `#`, which would lead to invalid URLs. As the result, gRPC Transcoding uses a custom encoding for multi segment variables. The path variables **must not** refer to any repeated or mapped field, because client libraries are not capable of handling such variable expansion. The path variables **must not** capture the leading \"/\" character. The reason is that the most common use case \"{var}\" does not capture the leading \"/\" character. For consistency, all path variables must share the same behavior. Repeated message fields must not be mapped to URL query parameters, because no client library can support such complicated mapping. If an API needs to use a JSON array for request or response body, it can map the request or response body to a repeated field. However, some gRPC Transcoding implementations may not support this feature.", +"description": "gRPC Transcoding gRPC Transcoding is a feature for mapping between a gRPC method and one or more HTTP REST endpoints. It allows developers to build a single API service that supports both gRPC APIs and REST APIs. Many systems, including [Google APIs](https://github.com/googleapis/googleapis), [Cloud Endpoints](https://cloud.google.com/endpoints), [gRPC Gateway](https://github.com/grpc-ecosystem/grpc-gateway), and [Envoy](https://github.com/envoyproxy/envoy) proxy support this feature and use it for large scale production services. `HttpRule` defines the schema of the gRPC/REST mapping. The mapping specifies how different portions of the gRPC request message are mapped to the URL path, URL query parameters, and HTTP request body. It also controls how the gRPC response message is mapped to the HTTP response body. `HttpRule` is typically specified as an `google.api.http` annotation on the gRPC method. Each mapping specifies a URL path template and an HTTP method. The path template may refer to one or more fields in the gRPC request message, as long as each field is a non-repeated field with a primitive (non-message) type. The path template controls how fields of the request message are mapped to the URL path. Example: service Messaging { rpc GetMessage(GetMessageRequest) returns (Message) { option (google.api.http) = { get: \"/v1/{name=messages/*}\" }; } } message GetMessageRequest { string name = 1; // Mapped to URL path. } message Message { string text = 1; // The resource content. 
} This enables an HTTP REST to gRPC mapping as below: - HTTP: `GET /v1/messages/123456` - gRPC: `GetMessage(name: \"messages/123456\")` Any fields in the request message which are not bound by the path template automatically become HTTP query parameters if there is no HTTP request body. For example: service Messaging { rpc GetMessage(GetMessageRequest) returns (Message) { option (google.api.http) = { get:\"/v1/messages/{message_id}\" }; } } message GetMessageRequest { message SubMessage { string subfield = 1; } string message_id = 1; // Mapped to URL path. int64 revision = 2; // Mapped to URL query parameter `revision`. SubMessage sub = 3; // Mapped to URL query parameter `sub.subfield`. } This enables an HTTP JSON to RPC mapping as below: - HTTP: `GET /v1/messages/123456?revision=2&sub.subfield=foo` - gRPC: `GetMessage(message_id: \"123456\" revision: 2 sub: SubMessage(subfield: \"foo\"))` Note that fields which are mapped to URL query parameters must have a primitive type or a repeated primitive type or a non-repeated message type. In the case of a repeated type, the parameter can be repeated in the URL as `...?param=A&param=B`. In the case of a message type, each field of the message is mapped to a separate parameter, such as `...?foo.a=A&foo.b=B&foo.c=C`. For HTTP methods that allow a request body, the `body` field specifies the mapping. Consider a REST update method on the message resource collection: service Messaging { rpc UpdateMessage(UpdateMessageRequest) returns (Message) { option (google.api.http) = { patch: \"/v1/messages/{message_id}\" body: \"message\" }; } } message UpdateMessageRequest { string message_id = 1; // mapped to the URL Message message = 2; // mapped to the body } The following HTTP JSON to RPC mapping is enabled, where the representation of the JSON in the request body is determined by the proto3 JSON encoding: - HTTP: `PATCH /v1/messages/123456 { \"text\": \"Hi!\" }` - gRPC: `UpdateMessage(message_id: \"123456\" message { text: \"Hi!\" })` The special name `*` can be used in the body mapping to define that every field not bound by the path template should be mapped to the request body. This enables the following alternative definition of the update method: service Messaging { rpc UpdateMessage(Message) returns (Message) { option (google.api.http) = { patch: \"/v1/messages/{message_id}\" body: \"*\" }; } } message Message { string message_id = 1; string text = 2; } The following HTTP JSON to RPC mapping is enabled: - HTTP: `PATCH /v1/messages/123456 { \"text\": \"Hi!\" }` - gRPC: `UpdateMessage(message_id: \"123456\" text: \"Hi!\")` Note that when using `*` in the body mapping, it is not possible to have HTTP parameters, as all fields not bound by the path end in the body. This makes this option rarely used in practice when defining REST APIs. The common usage of `*` is in custom methods which don't use the URL at all for transferring data. It is possible to define multiple HTTP methods for one RPC by using the `additional_bindings` option.
Example: service Messaging { rpc GetMessage(GetMessageRequest) returns (Message) { option (google.api.http) = { get: \"/v1/messages/{message_id}\" additional_bindings { get: \"/v1/users/{user_id}/messages/{message_id}\" } }; } } message GetMessageRequest { string message_id = 1; string user_id = 2; } This enables the following two alternative HTTP JSON to RPC mappings: - HTTP: `GET /v1/messages/123456` - gRPC: `GetMessage(message_id: \"123456\")` - HTTP: `GET /v1/users/me/messages/123456` - gRPC: `GetMessage(user_id: \"me\" message_id: \"123456\")` Rules for HTTP mapping 1. Leaf request fields (recursively expanded nested messages in the request message) are classified into three categories: - Fields referred to by the path template. They are passed via the URL path. - Fields referred to by the HttpRule.body. They are passed via the HTTP request body. - All other fields are passed via the URL query parameters, and the parameter name is the field path in the request message. A repeated field can be represented as multiple query parameters under the same name. 2. If HttpRule.body is \"*\", there is no URL query parameter; all fields are passed via URL path and HTTP request body. 3. If HttpRule.body is omitted, there is no HTTP request body; all fields are passed via URL path and URL query parameters. Path template syntax Template = \"/\" Segments [ Verb ] ; Segments = Segment { \"/\" Segment } ; Segment = \"*\" | \"**\" | LITERAL | Variable ; Variable = \"{\" FieldPath [ \"=\" Segments ] \"}\" ; FieldPath = IDENT { \".\" IDENT } ; Verb = \":\" LITERAL ; The syntax `*` matches a single URL path segment. The syntax `**` matches zero or more URL path segments, which must be the last part of the URL path except the `Verb`. The syntax `Variable` matches part of the URL path as specified by its template. A variable template must not contain other variables. If a variable matches a single path segment, its template may be omitted, e.g. `{var}` is equivalent to `{var=*}`. The syntax `LITERAL` matches literal text in the URL path. If the `LITERAL` contains any reserved character, such characters should be percent-encoded before the matching. If a variable contains exactly one path segment, such as `\"{var}\"` or `\"{var=*}\"`, when such a variable is expanded into a URL path on the client side, all characters except `[-_.~0-9a-zA-Z]` are percent-encoded. The server side does the reverse decoding. Such variables show up in the [Discovery Document](https://developers.google.com/discovery/v1/reference/apis) as `{var}`. If a variable contains multiple path segments, such as `\"{var=foo/*}\"` or `\"{var=**}\"`, when such a variable is expanded into a URL path on the client side, all characters except `[-_.~/0-9a-zA-Z]` are percent-encoded. The server side does the reverse decoding, except \"%2F\" and \"%2f\" are left unchanged. Such variables show up in the [Discovery Document](https://developers.google.com/discovery/v1/reference/apis) as `{+var}`. Using gRPC API Service Configuration gRPC API Service Configuration (service config) is a configuration language for configuring a gRPC service to become a user-facing product. The service config is simply the YAML representation of the `google.api.Service` proto message. As an alternative to annotating your proto file, you can configure gRPC transcoding in your service config YAML files. You do this by specifying a `HttpRule` that maps the gRPC method to a REST endpoint, achieving the same effect as the proto annotation.
This can be particularly useful if you have a proto that is reused in multiple services. Note that any transcoding specified in the service config will override any matching transcoding configuration in the proto. Example below selects a gRPC method and applies HttpRule to it. http: rules: - selector: example.v1.Messaging.GetMessage get: /v1/messages/{message_id}/{sub.subfield} Special notes When gRPC Transcoding is used to map a gRPC to JSON REST endpoints, the proto to JSON conversion must follow the [proto3 specification](https://developers.google.com/protocol-buffers/docs/proto3#json). While the single segment variable follows the semantics of [RFC 6570](https://tools.ietf.org/html/rfc6570) Section 3.2.2 Simple String Expansion, the multi segment variable **does not** follow RFC 6570 Section 3.2.3 Reserved Expansion. The reason is that the Reserved Expansion does not expand special characters like `?` and `#`, which would lead to invalid URLs. As the result, gRPC Transcoding uses a custom encoding for multi segment variables. The path variables **must not** refer to any repeated or mapped field, because client libraries are not capable of handling such variable expansion. The path variables **must not** capture the leading \"/\" character. The reason is that the most common use case \"{var}\" does not capture the leading \"/\" character. For consistency, all path variables must share the same behavior. Repeated message fields must not be mapped to URL query parameters, because no client library can support such complicated mapping. If an API needs to use a JSON array for request or response body, it can map the request or response body to a repeated field. However, some gRPC Transcoding implementations may not support this feature.", "id": "HttpRule", "properties": { "additionalBindings": { diff --git a/googleapiclient/discovery_cache/documents/sheets.v4.json b/googleapiclient/discovery_cache/documents/sheets.v4.json index df8719de481..b15f998a31b 100644 --- a/googleapiclient/discovery_cache/documents/sheets.v4.json +++ b/googleapiclient/discovery_cache/documents/sheets.v4.json @@ -870,7 +870,7 @@ } } }, -"revision": "20240514", +"revision": "20240528", "rootUrl": "https://sheets.googleapis.com/", "schemas": { "AddBandingRequest": { diff --git a/googleapiclient/discovery_cache/documents/slides.v1.json b/googleapiclient/discovery_cache/documents/slides.v1.json index 8319c3ab049..4d184f07924 100644 --- a/googleapiclient/discovery_cache/documents/slides.v1.json +++ b/googleapiclient/discovery_cache/documents/slides.v1.json @@ -313,7 +313,7 @@ } } }, -"revision": "20240514", +"revision": "20240528", "rootUrl": "https://slides.googleapis.com/", "schemas": { "AffineTransform": { diff --git a/googleapiclient/discovery_cache/documents/solar.v1.json b/googleapiclient/discovery_cache/documents/solar.v1.json index f46205ba2d9..b42c2f5f92c 100644 --- a/googleapiclient/discovery_cache/documents/solar.v1.json +++ b/googleapiclient/discovery_cache/documents/solar.v1.json @@ -267,7 +267,7 @@ } } }, -"revision": "20240522", +"revision": "20240602", "rootUrl": "https://solar.googleapis.com/", "schemas": { "BuildingInsights": { diff --git a/googleapiclient/discovery_cache/documents/spanner.v1.json b/googleapiclient/discovery_cache/documents/spanner.v1.json index 2bddb8dc90d..412c3e6d30d 100644 --- a/googleapiclient/discovery_cache/documents/spanner.v1.json +++ b/googleapiclient/discovery_cache/documents/spanner.v1.json @@ -1396,6 +1396,35 @@ }, "databases": { "methods": { +"changequorum": { 
+"description": "ChangeQuorum is strictly restricted to databases that use dual region instance configurations. Initiates a background operation to change quorum a database from dual-region mode to single-region mode and vice versa. The returned long-running operation will have a name of the format `projects//instances//databases//operations/` and can be used to track execution of the ChangeQuorum. The metadata field type is ChangeQuorumMetadata. Authorization requires `spanner.databases.changequorum` permission on the resource database.", +"flatPath": "v1/projects/{projectsId}/instances/{instancesId}/databases/{databasesId}:changequorum", +"httpMethod": "POST", +"id": "spanner.projects.instances.databases.changequorum", +"parameterOrder": [ +"name" +], +"parameters": { +"name": { +"description": "Required. Name of the database in which to apply the ChangeQuorum. Values are of the form `projects//instances//databases/`.", +"location": "path", +"pattern": "^projects/[^/]+/instances/[^/]+/databases/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1/{+name}:changequorum", +"request": { +"$ref": "ChangeQuorumRequest" +}, +"response": { +"$ref": "Operation" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform", +"https://www.googleapis.com/auth/spanner.admin" +] +}, "create": { "description": "Creates a new Cloud Spanner database and starts to prepare it for serving. The returned long-running operation will have a name of the format `/operations/` and can be used to track preparation of the database. The metadata field type is CreateDatabaseMetadata. The response field type is Database, if successful.", "flatPath": "v1/projects/{projectsId}/instances/{instancesId}/databases", @@ -2613,7 +2642,7 @@ "type": "string" }, "parent": { -"description": "Required. The instance whose instance partitions should be listed. Values are of the form `projects//instances/`.", +"description": "Required. The instance whose instance partitions should be listed. Values are of the form `projects//instances/`. Use `{instance} = '-'` to list instance partitions for all Instances in a project, e.g., `projects/myproject/instances/-`.", "location": "path", "pattern": "^projects/[^/]+/instances/[^/]+$", "required": true, @@ -2976,7 +3005,7 @@ } } }, -"revision": "20240423", +"revision": "20240529", "rootUrl": "https://spanner.googleapis.com/", "schemas": { "AutoscalingConfig": { @@ -3278,6 +3307,46 @@ }, "type": "object" }, +"ChangeQuorumMetadata": { +"description": "Metadata type for the long-running operation returned by ChangeQuorum.", +"id": "ChangeQuorumMetadata", +"properties": { +"endTime": { +"description": "If set, the time at which this operation failed or was completed successfully.", +"format": "google-datetime", +"type": "string" +}, +"request": { +"$ref": "ChangeQuorumRequest", +"description": "The request for ChangeQuorum." +}, +"startTime": { +"description": "Time the request was received.", +"format": "google-datetime", +"type": "string" +} +}, +"type": "object" +}, +"ChangeQuorumRequest": { +"description": "The request for ChangeQuorum.", +"id": "ChangeQuorumRequest", +"properties": { +"etag": { +"description": "Optional. The etag is the hash of the QuorumInfo. The ChangeQuorum operation will only be performed if the etag matches that of the QuorumInfo in the current database resource. Otherwise the API will return an `ABORTED` error. 
The etag is used for optimistic concurrency control as a way to help prevent simultaneous change quorum requests that could create a race condition.", +"type": "string" +}, +"name": { +"description": "Required. Name of the database in which to apply the ChangeQuorum. Values are of the form `projects//instances//databases/`.", +"type": "string" +}, +"quorumType": { +"$ref": "QuorumType", +"description": "Required. The type of this Quorum." +} +}, +"type": "object" +}, "ChildLink": { "description": "Metadata associated with a parent-child relationship appearing in a PlanNode.", "id": "ChildLink", @@ -3761,6 +3830,11 @@ "description": "Required. The name of the database. Values are of the form `projects//instances//databases/`, where `` is as specified in the `CREATE DATABASE` statement. This name can be passed to other API methods to identify the database.", "type": "string" }, +"quorumInfo": { +"$ref": "QuorumInfo", +"description": "Output only. Applicable only for databases that use dual region instance configurations. Contains information about the quorum.", +"readOnly": true +}, "reconciling": { "description": "Output only. If true, the database is being updated. If false, there are no ongoing update operations for the database.", "readOnly": true, @@ -3915,6 +3989,12 @@ }, "type": "object" }, +"DualRegionQuorum": { +"description": "Message type for a dual-region quorum. Currently this type has no options.", +"id": "DualRegionQuorum", +"properties": {}, +"type": "object" +}, "Empty": { "description": "A generic empty message that you can re-use to avoid defining duplicated empty messages in your APIs. A typical example is to use it as the request or the response type of an API method. For instance: service Foo { rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty); }", "id": "Empty", @@ -4343,12 +4423,12 @@ "type": "string" }, "nodeCount": { -"description": "The number of nodes allocated to this instance. At most one of either node_count or processing_units should be present in the message. Users can set the node_count field to specify the target number of nodes allocated to the instance. This may be zero in API responses for instances that are not yet in state `READY`. See [the documentation](https://cloud.google.com/spanner/docs/compute-capacity) for more information about nodes and processing units.", +"description": "The number of nodes allocated to this instance. At most one of either node_count or processing_units should be present in the message. Users can set the node_count field to specify the target number of nodes allocated to the instance. If autoscaling is enabled, node_count is treated as an OUTPUT_ONLY field and reflects the current number of nodes allocated to the instance. This may be zero in API responses for instances that are not yet in state `READY`. See [the documentation](https://cloud.google.com/spanner/docs/compute-capacity) for more information about nodes and processing units.", "format": "int32", "type": "integer" }, "processingUnits": { -"description": "The number of processing units allocated to this instance. At most one of processing_units or node_count should be present in the message. Users can set the processing_units field to specify the target number of processing units allocated to the instance. This may be zero in API responses for instances that are not yet in state `READY`. 
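For orientation, here is a brief sketch of how the new `changequorum` method and `ChangeQuorumRequest` schema defined in this diff could be invoked through the discovery-based Python client; the project, instance, database, and region names are placeholders.

from googleapiclient.discovery import build

spanner = build("spanner", "v1")
db = "projects/my-project/instances/my-instance/databases/my-db"

# Move a dual-region database to a single-region quorum. quorumType is a
# union: exactly one of singleRegion or dualRegion should be set, and the
# serving location must be a region of the instance configuration.
op = spanner.projects().instances().databases().changequorum(
    name=db,
    body={"quorumType": {"singleRegion": {"servingLocation": "us-central1"}}},
).execute()
# `op` is a long-running Operation; its metadata field type is
# ChangeQuorumMetadata, as described above.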
See [the documentation](https://cloud.google.com/spanner/docs/compute-capacity) for more information about nodes and processing units.", +"description": "The number of processing units allocated to this instance. At most one of processing_units or node_count should be present in the message. Users can set the processing_units field to specify the target number of processing units allocated to the instance. If autoscaling is enabled, processing_units is treated as an OUTPUT_ONLY field and reflects the current number of processing units allocated to the instance. This may be zero in API responses for instances that are not yet in state `READY`. See [the documentation](https://cloud.google.com/spanner/docs/compute-capacity) for more information about nodes and processing units.", "format": "int32", "type": "integer" }, @@ -4452,6 +4532,23 @@ "readOnly": true, "type": "array" }, +"quorumType": { +"description": "Output only. The `QuorumType` of the instance configuration.", +"enum": [ +"QUORUM_TYPE_UNSPECIFIED", +"REGION", +"DUAL_REGION", +"MULTI_REGION" +], +"enumDescriptions": [ +"Not specified.", +"An instance configuration tagged with REGION quorum type forms a write quorum in a single region.", +"An instance configuration tagged with DUAL_REGION quorum type forms a write quorum with exactly two read-write regions in a multi-region configuration. This instance configuration requires reconfiguration in the event of regional failures.", +"An instance configuration tagged with MULTI_REGION quorum type forms write quorums from replicas spread across more than one region in a multi-region configuration." +], +"readOnly": true, +"type": "string" +}, "reconciling": { "description": "Output only. If true, the instance config is being created or updated. If false, there are no ongoing operations for the instance config.", "readOnly": true, @@ -4886,7 +4983,7 @@ "type": "string" }, "unreachable": { -"description": "The list of unreachable instance partitions. It includes the names of instance partitions whose metadata could not be retrieved within instance_partition_deadline.", +"description": "The list of unreachable instances or instance partitions. It includes the names of instances or instance partitions whose metadata could not be retrieved within instance_partition_deadline.", "items": { "type": "string" }, @@ -5531,6 +5628,59 @@ }, "type": "object" }, +"QuorumInfo": { +"description": "Information about the dual region quorum.", +"id": "QuorumInfo", +"properties": { +"etag": { +"description": "Output only. The etag is used for optimistic concurrency control as a way to help prevent simultaneous ChangeQuorum requests that could create a race condition.", +"readOnly": true, +"type": "string" +}, +"initiator": { +"description": "Output only. Whether this ChangeQuorum was initiated by Google or by a user.", +"enum": [ +"INITIATOR_UNSPECIFIED", +"GOOGLE", +"USER" +], +"enumDescriptions": [ +"Unspecified.", +"ChangeQuorum initiated by Google.", +"ChangeQuorum initiated by User." +], +"readOnly": true, +"type": "string" +}, +"quorumType": { +"$ref": "QuorumType", +"description": "Output only. The type of this quorum. See QuorumType for more information about quorum type specifications.", +"readOnly": true +}, +"startTime": { +"description": "Output only. The timestamp when the request was triggered.", +"format": "google-datetime", +"readOnly": true, +"type": "string" +} +}, +"type": "object" +}, +"QuorumType": { +"description": "Information about the database quorum type. 
This applies only to dual-region instance configurations.", +"id": "QuorumType", +"properties": { +"dualRegion": { +"$ref": "DualRegionQuorum", +"description": "Dual region quorum type." +}, +"singleRegion": { +"$ref": "SingleRegionQuorum", +"description": "Single region quorum type." +} +}, +"type": "object" +}, "ReadOnly": { "description": "Message type to initiate a read-only transaction.", "id": "ReadOnly", @@ -6055,6 +6205,17 @@ }, "type": "object" }, +"SingleRegionQuorum": { +"description": "Message type for a single-region quorum.", +"id": "SingleRegionQuorum", +"properties": { +"servingLocation": { +"description": "Required. The location of the serving region, e.g. \"us-central1\". The location must be one of the regions within the dual region instance configuration of your database. The list of valid locations is available via the [GetInstanceConfig][InstanceAdmin.GetInstanceConfig] API. This should only be used when changing the quorum to the single-region quorum type.", +"type": "string" +} +}, +"type": "object" +}, "Statement": { "description": "A single DML statement.", "id": "Statement", @@ -6168,7 +6329,7 @@ "type": "object" }, "TransactionOptions": { -"description": "Transactions: Each session can have at most one active transaction at a time (note that standalone reads and queries use a transaction internally and do count towards the one transaction limit). After the active transaction is completed, the session can immediately be re-used for the next transaction. It is not necessary to create a new session for each transaction. Transaction modes: Cloud Spanner supports three transaction modes: 1. Locking read-write. This type of transaction is the only way to write data into Cloud Spanner. These transactions rely on pessimistic locking and, if necessary, two-phase commit. Locking read-write transactions may abort, requiring the application to retry. 2. Snapshot read-only. Snapshot read-only transactions provide guaranteed consistency across several reads, but do not allow writes. Snapshot read-only transactions can be configured to read at timestamps in the past, or configured to perform a strong read (where Spanner will select a timestamp such that the read is guaranteed to see the effects of all transactions that have committed before the start of the read). Snapshot read-only transactions do not need to be committed. Queries on change streams must be performed with the snapshot read-only transaction mode, specifying a strong read. Please see TransactionOptions.ReadOnly.strong for more details. 3. Partitioned DML. This type of transaction is used to execute a single Partitioned DML statement. Partitioned DML partitions the key space and runs the DML statement over each partition in parallel using separate, internal transactions that commit independently. Partitioned DML transactions do not need to be committed. For transactions that only read, snapshot read-only transactions provide simpler semantics and are almost always faster. In particular, read-only transactions do not take locks, so they do not conflict with read-write transactions. As a consequence of not taking locks, they also do not abort, so retry loops are not needed. Transactions may only read-write data in a single database. They may, however, read-write data in different tables within that database. Locking read-write transactions: Locking transactions may be used to atomically read-modify-write data anywhere in a database. This type of transaction is externally consistent. 
Clients should attempt to minimize the amount of time a transaction is active. Faster transactions commit with higher probability and cause less contention. Cloud Spanner attempts to keep read locks active as long as the transaction continues to do reads, and the transaction has not been terminated by Commit or Rollback. Long periods of inactivity at the client may cause Cloud Spanner to release a transaction's locks and abort it. Conceptually, a read-write transaction consists of zero or more reads or SQL statements followed by Commit. At any time before Commit, the client can send a Rollback request to abort the transaction. Semantics: Cloud Spanner can commit the transaction if all read locks it acquired are still valid at commit time, and it is able to acquire write locks for all writes. Cloud Spanner can abort the transaction for any reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees that the transaction has not modified any user data in Cloud Spanner. Unless the transaction commits, Cloud Spanner makes no guarantees about how long the transaction's locks were held for. It is an error to use Cloud Spanner locks for any sort of mutual exclusion other than between Cloud Spanner transactions themselves. Retrying aborted transactions: When a transaction aborts, the application can choose to retry the whole transaction again. To maximize the chances of successfully committing the retry, the client should execute the retry in the same session as the original attempt. The original session's lock priority increases with each consecutive abort, meaning that each attempt has a slightly better chance of success than the previous. Note that the lock priority is preserved per session (not per transaction). Lock priority is set by the first read or write in the first attempt of a read-write transaction. If the application starts a new session to retry the whole transaction, the transaction loses its original lock priority. Moreover, the lock priority is only preserved if the transaction fails with an `ABORTED` error. Under some circumstances (for example, many transactions attempting to modify the same row(s)), a transaction can abort many times in a short period before successfully committing. Thus, it is not a good idea to cap the number of retries a transaction can attempt; instead, it is better to limit the total amount of time spent retrying. Idle transactions: A transaction is considered idle if it has no outstanding reads or SQL queries and has not started a read or SQL query within the last 10 seconds. Idle transactions can be aborted by Cloud Spanner so that they don't hold on to locks indefinitely. If an idle transaction is aborted, the commit will fail with error `ABORTED`. If this behavior is undesirable, periodically executing a simple SQL query in the transaction (for example, `SELECT 1`) prevents the transaction from becoming idle. Snapshot read-only transactions: Snapshot read-only transactions provides a simpler method than locking read-write transactions for doing several consistent reads. However, this type of transaction does not support writes. Snapshot transactions do not take locks. Instead, they work by choosing a Cloud Spanner timestamp, then executing all reads at that timestamp. Since they do not acquire locks, they do not block concurrent read-write transactions. Unlike locking read-write transactions, snapshot read-only transactions never abort. 
They can fail if the chosen read timestamp is garbage collected; however, the default garbage collection policy is generous enough that most applications do not need to worry about this in practice. Snapshot read-only transactions do not need to call Commit or Rollback (and in fact are not permitted to do so). To execute a snapshot transaction, the client specifies a timestamp bound, which tells Cloud Spanner how to choose a read timestamp. The types of timestamp bound are: - Strong (the default). - Bounded staleness. - Exact staleness. If the Cloud Spanner database to be read is geographically distributed, stale read-only transactions can execute more quickly than strong or read-write transactions, because they are able to execute far from the leader replica. Each type of timestamp bound is discussed in detail below. Strong: Strong reads are guaranteed to see the effects of all transactions that have committed before the start of the read. Furthermore, all rows yielded by a single read are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. Strong reads are not repeatable: two consecutive strong read-only transactions might return inconsistent results if there are concurrent writes. If consistency across reads is required, the reads should be executed within a transaction or at an exact read timestamp. Queries on change streams (see below for more details) must also specify the strong read timestamp bound. See TransactionOptions.ReadOnly.strong. Exact staleness: These timestamp bounds execute reads at a user-specified timestamp. Reads at a timestamp are guaranteed to see a consistent prefix of the global transaction history: they observe modifications done by all transactions with a commit timestamp less than or equal to the read timestamp, and observe none of the modifications done by transactions with a larger commit timestamp. They will block until all conflicting transactions that may be assigned commit timestamps <= the read timestamp have finished. The timestamp can either be expressed as an absolute Cloud Spanner commit timestamp or a staleness relative to the current time. These modes do not require a \"negotiation phase\" to pick a timestamp. As a result, they execute slightly faster than the equivalent boundedly stale concurrency modes. On the other hand, boundedly stale reads usually return fresher results. See TransactionOptions.ReadOnly.read_timestamp and TransactionOptions.ReadOnly.exact_staleness. Bounded staleness: Bounded staleness modes allow Cloud Spanner to pick the read timestamp, subject to a user-provided staleness bound. Cloud Spanner chooses the newest timestamp within the staleness bound that allows execution of the reads at the closest available replica without blocking. All rows yielded are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. Boundedly stale reads are not repeatable: two stale reads, even if they use the same staleness bound, can execute at different timestamps and thus return inconsistent results. Boundedly stale reads execute in two phases: the first phase negotiates a timestamp among all replicas needed to serve the read. In the second phase, reads are executed at the negotiated timestamp. As a result of the two phase execution, bounded staleness reads are usually a little slower than comparable exact staleness reads. 
However, they are typically able to return fresher results, and are more likely to execute at the closest replica. Because the timestamp negotiation requires up-front knowledge of which rows will be read, it can only be used with single-use read-only transactions. See TransactionOptions.ReadOnly.max_staleness and TransactionOptions.ReadOnly.min_read_timestamp. Old read timestamps and garbage collection: Cloud Spanner continuously garbage collects deleted and overwritten data in the background to reclaim storage space. This process is known as \"version GC\". By default, version GC reclaims versions after they are one hour old. Because of this, Cloud Spanner cannot perform reads at read timestamps more than one hour in the past. This restriction also applies to in-progress reads and/or SQL queries whose timestamp become too old while executing. Reads and SQL queries with too-old read timestamps fail with the error `FAILED_PRECONDITION`. You can configure and extend the `VERSION_RETENTION_PERIOD` of a database up to a period as long as one week, which allows Cloud Spanner to perform reads up to one week in the past. Querying change Streams: A Change Stream is a schema object that can be configured to watch data changes on the entire database, a set of tables, or a set of columns in a database. When a change stream is created, Spanner automatically defines a corresponding SQL Table-Valued Function (TVF) that can be used to query the change records in the associated change stream using the ExecuteStreamingSql API. The name of the TVF for a change stream is generated from the name of the change stream: READ_. All queries on change stream TVFs must be executed using the ExecuteStreamingSql API with a single-use read-only transaction with a strong read-only timestamp_bound. The change stream TVF allows users to specify the start_timestamp and end_timestamp for the time range of interest. All change records within the retention period is accessible using the strong read-only timestamp_bound. All other TransactionOptions are invalid for change stream queries. In addition, if TransactionOptions.read_only.return_read_timestamp is set to true, a special value of 2^63 - 2 will be returned in the Transaction message that describes the transaction, instead of a valid read timestamp. This special value should be discarded and not used for any subsequent queries. Please see https://cloud.google.com/spanner/docs/change-streams for more details on how to query the change stream TVFs. Partitioned DML transactions: Partitioned DML transactions are used to execute DML statements with a different execution strategy that provides different, and often better, scalability properties for large, table-wide operations than DML in a ReadWrite transaction. Smaller scoped statements, such as an OLTP workload, should prefer using ReadWrite transactions. Partitioned DML partitions the keyspace and runs the DML statement on each partition in separate, internal transactions. These transactions commit automatically when complete, and run independently from one another. To reduce lock contention, this execution strategy only acquires read locks on rows that match the WHERE clause of the statement. Additionally, the smaller per-partition transactions hold locks for less time. That said, Partitioned DML is not a drop-in replacement for standard DML used in ReadWrite transactions. - The DML statement must be fully-partitionable. 
Specifically, the statement must be expressible as the union of many statements which each access only a single row of the table. - The statement is not applied atomically to all rows of the table. Rather, the statement is applied atomically to partitions of the table, in independent transactions. Secondary index rows are updated atomically with the base table rows. - Partitioned DML does not guarantee exactly-once execution semantics against a partition. The statement is applied at least once to each partition. It is strongly recommended that the DML statement should be idempotent to avoid unexpected results. For instance, it is potentially dangerous to run a statement such as `UPDATE table SET column = column + 1` as it could be run multiple times against some rows. - The partitions are committed automatically - there is no support for Commit or Rollback. If the call returns an error, or if the client issuing the ExecuteSql call dies, it is possible that some rows had the statement executed on them successfully. It is also possible that statement was never executed against other rows. - Partitioned DML transactions may only contain the execution of a single DML statement via ExecuteSql or ExecuteStreamingSql. - If any error is encountered during the execution of the partitioned DML operation (for instance, a UNIQUE INDEX violation, division by zero, or a value that cannot be stored due to schema constraints), then the operation is stopped at that point and an error is returned. It is possible that at this point, some partitions have been committed (or even committed multiple times), and other partitions have not been run at all. Given the above, Partitioned DML is good fit for large, database-wide, operations that are idempotent, such as deleting old rows from a very large table.", +"description": "Transactions: Each session can have at most one active transaction at a time (note that standalone reads and queries use a transaction internally and do count towards the one transaction limit). After the active transaction is completed, the session can immediately be re-used for the next transaction. It is not necessary to create a new session for each transaction. Transaction modes: Cloud Spanner supports three transaction modes: 1. Locking read-write. This type of transaction is the only way to write data into Cloud Spanner. These transactions rely on pessimistic locking and, if necessary, two-phase commit. Locking read-write transactions may abort, requiring the application to retry. 2. Snapshot read-only. Snapshot read-only transactions provide guaranteed consistency across several reads, but do not allow writes. Snapshot read-only transactions can be configured to read at timestamps in the past, or configured to perform a strong read (where Spanner will select a timestamp such that the read is guaranteed to see the effects of all transactions that have committed before the start of the read). Snapshot read-only transactions do not need to be committed. Queries on change streams must be performed with the snapshot read-only transaction mode, specifying a strong read. See TransactionOptions.ReadOnly.strong for more details. 3. Partitioned DML. This type of transaction is used to execute a single Partitioned DML statement. Partitioned DML partitions the key space and runs the DML statement over each partition in parallel using separate, internal transactions that commit independently. Partitioned DML transactions do not need to be committed. 
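As a sketch of mode 3 above (Partitioned DML) on the REST surface via the discovery-based client: the database name and SQL statement are placeholders, and the statement is idempotent, as the documentation recommends.

from googleapiclient.discovery import build

spanner = build("spanner", "v1")
sessions = spanner.projects().instances().databases().sessions()
db = "projects/my-project/instances/my-instance/databases/my-db"

session = sessions.create(database=db, body={}).execute()

# Begin a Partitioned DML transaction...
txn = sessions.beginTransaction(
    session=session["name"],
    body={"options": {"partitionedDml": {}}},
).execute()

# ...and run a single idempotent DML statement against it. Partitions
# commit automatically; no Commit or Rollback call follows.
sessions.executeSql(
    session=session["name"],
    body={
        "transaction": {"id": txn["id"]},
        "seqno": "1",  # required for DML statements
        "sql": "UPDATE Albums SET MarketingBudget = 0 WHERE MarketingBudget IS NULL",
    },
).execute()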
For transactions that only read, snapshot read-only transactions provide simpler semantics and are almost always faster. In particular, read-only transactions do not take locks, so they do not conflict with read-write transactions. As a consequence of not taking locks, they also do not abort, so retry loops are not needed. Transactions may only read and write data in a single database. They may, however, read and write data in different tables within that database. Locking read-write transactions: Locking transactions may be used to atomically read-modify-write data anywhere in a database. This type of transaction is externally consistent. Clients should attempt to minimize the amount of time a transaction is active. Faster transactions commit with higher probability and cause less contention. Cloud Spanner attempts to keep read locks active as long as the transaction continues to do reads, and the transaction has not been terminated by Commit or Rollback. Long periods of inactivity at the client may cause Cloud Spanner to release a transaction's locks and abort it. Conceptually, a read-write transaction consists of zero or more reads or SQL statements followed by Commit. At any time before Commit, the client can send a Rollback request to abort the transaction. Semantics: Cloud Spanner can commit the transaction if all read locks it acquired are still valid at commit time, and it is able to acquire write locks for all writes. Cloud Spanner can abort the transaction for any reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees that the transaction has not modified any user data in Cloud Spanner. Unless the transaction commits, Cloud Spanner makes no guarantees about how long the transaction's locks were held for. It is an error to use Cloud Spanner locks for any sort of mutual exclusion other than between Cloud Spanner transactions themselves. Retrying aborted transactions: When a transaction aborts, the application can choose to retry the whole transaction again. To maximize the chances of successfully committing the retry, the client should execute the retry in the same session as the original attempt. The original session's lock priority increases with each consecutive abort, meaning that each attempt has a slightly better chance of success than the previous. Note that the lock priority is preserved per session (not per transaction). Lock priority is set by the first read or write in the first attempt of a read-write transaction. If the application starts a new session to retry the whole transaction, the transaction loses its original lock priority. Moreover, the lock priority is only preserved if the transaction fails with an `ABORTED` error. Under some circumstances (for example, many transactions attempting to modify the same row(s)), a transaction can abort many times in a short period before successfully committing. Thus, it is not a good idea to cap the number of retries a transaction can attempt; instead, it is better to limit the total amount of time spent retrying. Idle transactions: A transaction is considered idle if it has no outstanding reads or SQL queries and has not started a read or SQL query within the last 10 seconds. Idle transactions can be aborted by Cloud Spanner so that they don't hold on to locks indefinitely. If an idle transaction is aborted, the commit will fail with error `ABORTED`. If this behavior is undesirable, periodically executing a simple SQL query in the transaction (for example, `SELECT 1`) prevents the transaction from becoming idle.
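Putting the retry guidance together, a bounded retry loop might look like the sketch below. It reuses the same session so the transaction's lock priority carries across attempts, and it caps total elapsed time rather than the attempt count. The session name and the 60-second budget are illustrative assumptions, and `ABORTED` is assumed to surface as HTTP 409 per the standard google.rpc code mapping.

import time

from googleapiclient import discovery
from googleapiclient.errors import HttpError

spanner = discovery.build("spanner", "v1")
sessions = spanner.projects().instances().databases().sessions()
session = "projects/my-project/instances/my-instance/databases/my-db/sessions/my-session"  # placeholder

deadline = time.monotonic() + 60  # cap total time retrying, not attempt count
while True:
    txn = sessions.beginTransaction(
        session=session, body={"options": {"readWrite": {}}}
    ).execute()
    try:
        # ... reads and ExecuteSql calls referencing txn["id"] go here ...
        sessions.commit(
            session=session, body={"transactionId": txn["id"], "mutations": []}
        ).execute()
        break
    except HttpError as err:
        # gRPC ABORTED maps to HTTP 409; retry in the same session.
        if err.resp.status == 409 and time.monotonic() < deadline:
            continue
        raise

A similar periodic executeSql call with `SELECT 1` inside a long-lived transaction keeps it from being aborted as idle.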
Snapshot read-only transactions: Snapshot read-only transactions provide a simpler method than locking read-write transactions for doing several consistent reads. However, this type of transaction does not support writes. Snapshot transactions do not take locks. Instead, they work by choosing a Cloud Spanner timestamp, then executing all reads at that timestamp. Since they do not acquire locks, they do not block concurrent read-write transactions. Unlike locking read-write transactions, snapshot read-only transactions never abort. They can fail if the chosen read timestamp is garbage collected; however, the default garbage collection policy is generous enough that most applications do not need to worry about this in practice. Snapshot read-only transactions do not need to call Commit or Rollback (and in fact are not permitted to do so). To execute a snapshot transaction, the client specifies a timestamp bound, which tells Cloud Spanner how to choose a read timestamp. The types of timestamp bound are: - Strong (the default). - Bounded staleness. - Exact staleness. If the Cloud Spanner database to be read is geographically distributed, stale read-only transactions can execute more quickly than strong or read-write transactions, because they are able to execute far from the leader replica. Each type of timestamp bound is discussed in detail below. Strong: Strong reads are guaranteed to see the effects of all transactions that have committed before the start of the read. Furthermore, all rows yielded by a single read are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. Strong reads are not repeatable: two consecutive strong read-only transactions might return inconsistent results if there are concurrent writes. If consistency across reads is required, the reads should be executed within a transaction or at an exact read timestamp. Queries on change streams (see below for more details) must also specify the strong read timestamp bound. See TransactionOptions.ReadOnly.strong. Exact staleness: These timestamp bounds execute reads at a user-specified timestamp. Reads at a timestamp are guaranteed to see a consistent prefix of the global transaction history: they observe modifications done by all transactions with a commit timestamp less than or equal to the read timestamp, and observe none of the modifications done by transactions with a larger commit timestamp. They will block until all conflicting transactions that may be assigned commit timestamps <= the read timestamp have finished. The timestamp can either be expressed as an absolute Cloud Spanner commit timestamp or a staleness relative to the current time. These modes do not require a \"negotiation phase\" to pick a timestamp. As a result, they execute slightly faster than the equivalent boundedly stale concurrency modes. On the other hand, boundedly stale reads usually return fresher results. See TransactionOptions.ReadOnly.read_timestamp and TransactionOptions.ReadOnly.exact_staleness. Bounded staleness: Bounded staleness modes allow Cloud Spanner to pick the read timestamp, subject to a user-provided staleness bound. Cloud Spanner chooses the newest timestamp within the staleness bound that allows execution of the reads at the closest available replica without blocking. All rows yielded are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction.
Boundedly stale reads are not repeatable: two stale reads, even if they use the same staleness bound, can execute at different timestamps and thus return inconsistent results. Boundedly stale reads execute in two phases: the first phase negotiates a timestamp among all replicas needed to serve the read. In the second phase, reads are executed at the negotiated timestamp. As a result of the two-phase execution, bounded staleness reads are usually a little slower than comparable exact staleness reads. However, they are typically able to return fresher results, and are more likely to execute at the closest replica. Because the timestamp negotiation requires up-front knowledge of which rows will be read, it can only be used with single-use read-only transactions. See TransactionOptions.ReadOnly.max_staleness and TransactionOptions.ReadOnly.min_read_timestamp. Old read timestamps and garbage collection: Cloud Spanner continuously garbage collects deleted and overwritten data in the background to reclaim storage space. This process is known as \"version GC\". By default, version GC reclaims versions after they are one hour old. Because of this, Cloud Spanner cannot perform reads at read timestamps more than one hour in the past. This restriction also applies to in-progress reads and/or SQL queries whose timestamps become too old while executing. Reads and SQL queries with too-old read timestamps fail with the error `FAILED_PRECONDITION`. You can configure and extend the `VERSION_RETENTION_PERIOD` of a database up to a period as long as one week, which allows Cloud Spanner to perform reads up to one week in the past. Querying change streams: A change stream is a schema object that can be configured to watch data changes on the entire database, a set of tables, or a set of columns in a database. When a change stream is created, Spanner automatically defines a corresponding SQL Table-Valued Function (TVF) that can be used to query the change records in the associated change stream using the ExecuteStreamingSql API. The name of the TVF for a change stream is generated from the name of the change stream: READ_. All queries on change stream TVFs must be executed using the ExecuteStreamingSql API with a single-use read-only transaction with a strong read-only timestamp_bound. The change stream TVF allows users to specify the start_timestamp and end_timestamp for the time range of interest. All change records within the retention period are accessible using the strong read-only timestamp_bound. All other TransactionOptions are invalid for change stream queries. In addition, if TransactionOptions.read_only.return_read_timestamp is set to true, a special value of 2^63 - 2 will be returned in the Transaction message that describes the transaction, instead of a valid read timestamp. This special value should be discarded and not used for any subsequent queries. Please see https://cloud.google.com/spanner/docs/change-streams for more details on how to query the change stream TVFs.
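To make the TVF contract concrete, the sketch below issues a change stream query through ExecuteStreamingSql with the required single-use, strong read-only transaction. The stream name (READ_my_stream), time range, and session name are assumptions for illustration; results arrive as a sequence of PartialResultSet messages.

from googleapiclient import discovery

spanner = discovery.build("spanner", "v1")
sessions = spanner.projects().instances().databases().sessions()
session = "projects/my-project/instances/my-instance/databases/my-db/sessions/my-session"  # placeholder

response = sessions.executeStreamingSql(
    session=session,
    body={
        "sql": (
            "SELECT ChangeRecord FROM READ_my_stream("
            "start_timestamp => @start, end_timestamp => @end, "
            "partition_token => NULL, heartbeat_milliseconds => 10000)"
        ),
        "params": {"start": "2024-06-04T00:00:00Z", "end": "2024-06-04T01:00:00Z"},
        "paramTypes": {
            "start": {"code": "TIMESTAMP"},
            "end": {"code": "TIMESTAMP"},
        },
        # Required: single-use read-only transaction with a strong bound.
        "transaction": {"singleUse": {"readOnly": {"strong": True}}},
    },
).execute()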
Partitioned DML transactions: Partitioned DML transactions are used to execute DML statements with a different execution strategy that provides different, and often better, scalability properties for large, table-wide operations than DML in a ReadWrite transaction. Smaller-scoped statements, such as those in an OLTP workload, should use ReadWrite transactions. Partitioned DML partitions the keyspace and runs the DML statement on each partition in separate, internal transactions. These transactions commit automatically when complete, and run independently from one another. To reduce lock contention, this execution strategy only acquires read locks on rows that match the WHERE clause of the statement. Additionally, the smaller per-partition transactions hold locks for less time. That said, Partitioned DML is not a drop-in replacement for standard DML used in ReadWrite transactions. - The DML statement must be fully-partitionable. Specifically, the statement must be expressible as the union of many statements which each access only a single row of the table. - The statement is not applied atomically to all rows of the table. Rather, the statement is applied atomically to partitions of the table, in independent transactions. Secondary index rows are updated atomically with the base table rows. - Partitioned DML does not guarantee exactly-once execution semantics against a partition. The statement is applied at least once to each partition. It is strongly recommended that the DML statement be idempotent to avoid unexpected results. For instance, it is potentially dangerous to run a statement such as `UPDATE table SET column = column + 1` as it could be run multiple times against some rows. - The partitions are committed automatically - there is no support for Commit or Rollback. If the call returns an error, or if the client issuing the ExecuteSql call dies, it is possible that some rows had the statement executed on them successfully. It is also possible that the statement was never executed against other rows. - Partitioned DML transactions may only contain the execution of a single DML statement via ExecuteSql or ExecuteStreamingSql. - If any error is encountered during the execution of the partitioned DML operation (for instance, a UNIQUE INDEX violation, division by zero, or a value that cannot be stored due to schema constraints), then the operation is stopped at that point and an error is returned. It is possible that at this point, some partitions have been committed (or even committed multiple times), and other partitions have not been run at all.
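Given those caveats, a Partitioned DML call reduces to two requests: begin a partitionedDml transaction, then run one idempotent statement; there is no Commit or Rollback step. The table, predicate, and session name below are illustrative placeholders.

from googleapiclient import discovery

spanner = discovery.build("spanner", "v1")
sessions = spanner.projects().instances().databases().sessions()
session = "projects/my-project/instances/my-instance/databases/my-db/sessions/my-session"  # placeholder

txn = sessions.beginTransaction(
    session=session, body={"options": {"partitionedDml": {}}}
).execute()
result = sessions.executeSql(
    session=session,
    body={
        # Idempotent by construction: re-running it deletes nothing new.
        "sql": "DELETE FROM Events WHERE CreatedAt < TIMESTAMP '2023-01-01'",
        "transaction": {"id": txn["id"]},
    },
).execute()
# For Partitioned DML, the reported row count is a lower bound
# (the statement is applied at least once per partition).
lower_bound = result["stats"]["rowCountLowerBound"]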
Given the above, Partitioned DML is a good fit for large, database-wide operations that are idempotent, such as deleting old rows from a very large table.", "id": "TransactionOptions", "properties": { "excludeTxnFromChangeStreams": { diff --git a/googleapiclient/discovery_cache/documents/speech.v1.json b/googleapiclient/discovery_cache/documents/speech.v1.json index 32cf849a540..b8176b0f326 100644 --- a/googleapiclient/discovery_cache/documents/speech.v1.json +++ b/googleapiclient/discovery_cache/documents/speech.v1.json @@ -524,7 +524,7 @@ } } }, -"revision": "20240516", +"revision": "20240523", "rootUrl": "https://speech.googleapis.com/", "schemas": { "ABNFGrammar": { diff --git a/googleapiclient/discovery_cache/documents/speech.v1p1beta1.json b/googleapiclient/discovery_cache/documents/speech.v1p1beta1.json index 3c87886f9f9..702490cb621 100644 --- a/googleapiclient/discovery_cache/documents/speech.v1p1beta1.json +++ b/googleapiclient/discovery_cache/documents/speech.v1p1beta1.json @@ -524,7 +524,7 @@ } } }, -"revision": "20240516", +"revision": "20240523", "rootUrl": "https://speech.googleapis.com/", "schemas": { "ABNFGrammar": { diff --git a/googleapiclient/discovery_cache/documents/sqladmin.v1.json b/googleapiclient/discovery_cache/documents/sqladmin.v1.json index b2cccd374a6..8086ec8b4a5 100644 --- a/googleapiclient/discovery_cache/documents/sqladmin.v1.json +++ b/googleapiclient/discovery_cache/documents/sqladmin.v1.json @@ -2267,7 +2267,7 @@ } } }, -"revision": "20240521", +"revision": "20240529", "rootUrl": "https://sqladmin.googleapis.com/", "schemas": { "AclEntry": { @@ -5543,7 +5543,10 @@ true "PG_SUBSCRIPTION_COUNT", "PG_SYNC_PARALLEL_LEVEL", "INSUFFICIENT_DISK_SIZE", -"INSUFFICIENT_MACHINE_TIER" +"INSUFFICIENT_MACHINE_TIER", +"UNSUPPORTED_EXTENSIONS_NOT_MIGRATED", +"EXTENSIONS_NOT_MIGRATED", +"PG_CRON_FLAG_ENABLED_IN_REPLICA" ], "enumDescriptions": [ "", @@ -5590,7 +5593,10 @@ true "Count of subscriptions needed to sync source data for PostgreSQL database.", "Final parallel level that is used to do migration.", "The disk size of the replica instance is smaller than the data size of the source instance.", -"The data size of the source instance is greater than 1 TB, the number of cores of the replica instance is less than 8, and the memory of the replica is less than 32 GB." +"The data size of the source instance is greater than 1 TB, the number of cores of the replica instance is less than 8, and the memory of the replica is less than 32 GB.", +"The warning message indicates that the unsupported extensions will not be migrated to the destination.", +"The warning message indicates that the pg_cron extension and settings will not be migrated to the destination.", +"The error message indicates that pg_cron flags are enabled on the destination, which is not supported during the migration."
], "type": "string" } diff --git a/googleapiclient/discovery_cache/documents/sqladmin.v1beta4.json b/googleapiclient/discovery_cache/documents/sqladmin.v1beta4.json index 15f572fe800..4fb18648910 100644 --- a/googleapiclient/discovery_cache/documents/sqladmin.v1beta4.json +++ b/googleapiclient/discovery_cache/documents/sqladmin.v1beta4.json @@ -2267,7 +2267,7 @@ } } }, -"revision": "20240521", +"revision": "20240529", "rootUrl": "https://sqladmin.googleapis.com/", "schemas": { "AclEntry": { @@ -5548,7 +5548,10 @@ true "PG_SUBSCRIPTION_COUNT", "PG_SYNC_PARALLEL_LEVEL", "INSUFFICIENT_DISK_SIZE", -"INSUFFICIENT_MACHINE_TIER" +"INSUFFICIENT_MACHINE_TIER", +"UNSUPPORTED_EXTENSIONS_NOT_MIGRATED", +"EXTENSIONS_NOT_MIGRATED", +"PG_CRON_FLAG_ENABLED_IN_REPLICA" ], "enumDescriptions": [ "", @@ -5595,7 +5598,10 @@ true "Count of subscriptions needed to sync source data for PostgreSQL database.", "Final parallel level that is used to do migration.", "The disk size of the replica instance is smaller than the data size of the source instance.", -"The data size of the source instance is greater than 1 TB, the number of cores of the replica instance is less than 8, and the memory of the replica is less than 32 GB." +"The data size of the source instance is greater than 1 TB, the number of cores of the replica instance is less than 8, and the memory of the replica is less than 32 GB.", +"The warning message indicates the unsupported extensions will not be migrated to the destination.", +"The warning message indicates the pg_cron extension and settings will not be migrated to the destination.", +"The error message indicates that pg_cron flags are enabled on the destination which is not supported during the migration." ], "type": "string" } diff --git a/googleapiclient/discovery_cache/documents/storage.v1.json b/googleapiclient/discovery_cache/documents/storage.v1.json index 3971984927e..d4e63518d20 100644 --- a/googleapiclient/discovery_cache/documents/storage.v1.json +++ b/googleapiclient/discovery_cache/documents/storage.v1.json @@ -33,7 +33,7 @@ "location": "me-central2" } ], -"etag": "\"3132383134303835313436343635393933303731\"", +"etag": "\"3131333631343030313731353833323230393337\"", "icons": { "x16": "https://www.google.com/images/icons/product/cloud_storage-16.png", "x32": "https://www.google.com/images/icons/product/cloud_storage-32.png" @@ -4075,7 +4075,7 @@ } } }, -"revision": "20240524", +"revision": "20240528", "rootUrl": "https://storage.googleapis.com/", "schemas": { "AnywhereCache": { diff --git a/googleapiclient/discovery_cache/documents/storagetransfer.v1.json b/googleapiclient/discovery_cache/documents/storagetransfer.v1.json index 89a4f737b80..8b55d09ab43 100644 --- a/googleapiclient/discovery_cache/documents/storagetransfer.v1.json +++ b/googleapiclient/discovery_cache/documents/storagetransfer.v1.json @@ -632,7 +632,7 @@ } } }, -"revision": "20240518", +"revision": "20240525", "rootUrl": "https://storagetransfer.googleapis.com/", "schemas": { "AgentPool": { diff --git a/googleapiclient/discovery_cache/documents/streetviewpublish.v1.json b/googleapiclient/discovery_cache/documents/streetviewpublish.v1.json index 2228f62d4d9..4566fc78880 100644 --- a/googleapiclient/discovery_cache/documents/streetviewpublish.v1.json +++ b/googleapiclient/discovery_cache/documents/streetviewpublish.v1.json @@ -534,7 +534,7 @@ } } }, -"revision": "20240526", +"revision": "20240602", "rootUrl": "https://streetviewpublish.googleapis.com/", "schemas": { "BatchDeletePhotosRequest": { diff --git 
a/googleapiclient/discovery_cache/documents/sts.v1.json b/googleapiclient/discovery_cache/documents/sts.v1.json index 0a874315cb6..8db4011d71c 100644 --- a/googleapiclient/discovery_cache/documents/sts.v1.json +++ b/googleapiclient/discovery_cache/documents/sts.v1.json @@ -116,7 +116,7 @@ } } }, -"revision": "20240520", +"revision": "20240523", "rootUrl": "https://sts.googleapis.com/", "schemas": { "GoogleIamV1Binding": { diff --git a/googleapiclient/discovery_cache/documents/sts.v1beta.json b/googleapiclient/discovery_cache/documents/sts.v1beta.json index 1811b0f29f6..436b9c9a2bf 100644 --- a/googleapiclient/discovery_cache/documents/sts.v1beta.json +++ b/googleapiclient/discovery_cache/documents/sts.v1beta.json @@ -116,7 +116,7 @@ } } }, -"revision": "20240520", +"revision": "20240523", "rootUrl": "https://sts.googleapis.com/", "schemas": { "GoogleIamV1Binding": { diff --git a/googleapiclient/discovery_cache/documents/tagmanager.v1.json b/googleapiclient/discovery_cache/documents/tagmanager.v1.json index d4561d5522c..9627e8c164c 100644 --- a/googleapiclient/discovery_cache/documents/tagmanager.v1.json +++ b/googleapiclient/discovery_cache/documents/tagmanager.v1.json @@ -1932,7 +1932,7 @@ } } }, -"revision": "20240522", +"revision": "20240531", "rootUrl": "https://tagmanager.googleapis.com/", "schemas": { "Account": { diff --git a/googleapiclient/discovery_cache/documents/tagmanager.v2.json b/googleapiclient/discovery_cache/documents/tagmanager.v2.json index 9c714ac7900..272b73d8bf5 100644 --- a/googleapiclient/discovery_cache/documents/tagmanager.v2.json +++ b/googleapiclient/discovery_cache/documents/tagmanager.v2.json @@ -3890,7 +3890,7 @@ } } }, -"revision": "20240522", +"revision": "20240531", "rootUrl": "https://tagmanager.googleapis.com/", "schemas": { "Account": { diff --git a/googleapiclient/discovery_cache/documents/tasks.v1.json b/googleapiclient/discovery_cache/documents/tasks.v1.json index 0df16ed541f..15fde63a434 100644 --- a/googleapiclient/discovery_cache/documents/tasks.v1.json +++ b/googleapiclient/discovery_cache/documents/tasks.v1.json @@ -566,7 +566,7 @@ } } }, -"revision": "20240526", +"revision": "20240602", "rootUrl": "https://tasks.googleapis.com/", "schemas": { "Task": { diff --git a/googleapiclient/discovery_cache/documents/testing.v1.json b/googleapiclient/discovery_cache/documents/testing.v1.json index bf0d1c9ac39..37f11dd5a93 100644 --- a/googleapiclient/discovery_cache/documents/testing.v1.json +++ b/googleapiclient/discovery_cache/documents/testing.v1.json @@ -449,7 +449,7 @@ } } }, -"revision": "20240521", +"revision": "20240530", "rootUrl": "https://testing.googleapis.com/", "schemas": { "Account": { diff --git a/googleapiclient/discovery_cache/documents/toolresults.v1beta3.json b/googleapiclient/discovery_cache/documents/toolresults.v1beta3.json index 027ea83a414..2cae6f4470b 100644 --- a/googleapiclient/discovery_cache/documents/toolresults.v1beta3.json +++ b/googleapiclient/discovery_cache/documents/toolresults.v1beta3.json @@ -1463,7 +1463,7 @@ } } }, -"revision": "20240527", +"revision": "20240529", "rootUrl": "https://toolresults.googleapis.com/", "schemas": { "ANR": { diff --git a/googleapiclient/discovery_cache/documents/tpu.v1.json b/googleapiclient/discovery_cache/documents/tpu.v1.json index f9f12311821..94b66ee02d5 100644 --- a/googleapiclient/discovery_cache/documents/tpu.v1.json +++ b/googleapiclient/discovery_cache/documents/tpu.v1.json @@ -659,7 +659,7 @@ } } }, -"revision": "20240519", +"revision": "20240528", "rootUrl": 
"https://tpu.googleapis.com/", "schemas": { "AcceleratorType": { diff --git a/googleapiclient/discovery_cache/documents/tpu.v1alpha1.json b/googleapiclient/discovery_cache/documents/tpu.v1alpha1.json index dc3d805fd49..5fd9ab8db8c 100644 --- a/googleapiclient/discovery_cache/documents/tpu.v1alpha1.json +++ b/googleapiclient/discovery_cache/documents/tpu.v1alpha1.json @@ -669,7 +669,7 @@ } } }, -"revision": "20240519", +"revision": "20240528", "rootUrl": "https://tpu.googleapis.com/", "schemas": { "AcceleratorType": { diff --git a/googleapiclient/discovery_cache/documents/tpu.v2.json b/googleapiclient/discovery_cache/documents/tpu.v2.json index 56d64f0f4d5..44925504c0f 100644 --- a/googleapiclient/discovery_cache/documents/tpu.v2.json +++ b/googleapiclient/discovery_cache/documents/tpu.v2.json @@ -887,7 +887,7 @@ } } }, -"revision": "20240519", +"revision": "20240528", "rootUrl": "https://tpu.googleapis.com/", "schemas": { "AcceleratorConfig": { diff --git a/googleapiclient/discovery_cache/documents/tpu.v2alpha1.json b/googleapiclient/discovery_cache/documents/tpu.v2alpha1.json index c11d9f0c87b..a298d9b49d1 100644 --- a/googleapiclient/discovery_cache/documents/tpu.v2alpha1.json +++ b/googleapiclient/discovery_cache/documents/tpu.v2alpha1.json @@ -965,7 +965,7 @@ } } }, -"revision": "20240519", +"revision": "20240528", "rootUrl": "https://tpu.googleapis.com/", "schemas": { "AcceleratorConfig": { diff --git a/googleapiclient/discovery_cache/documents/travelimpactmodel.v1.json b/googleapiclient/discovery_cache/documents/travelimpactmodel.v1.json index a703012d6e4..24cfe552492 100644 --- a/googleapiclient/discovery_cache/documents/travelimpactmodel.v1.json +++ b/googleapiclient/discovery_cache/documents/travelimpactmodel.v1.json @@ -116,7 +116,7 @@ } } }, -"revision": "20240523", +"revision": "20240602", "rootUrl": "https://travelimpactmodel.googleapis.com/", "schemas": { "ComputeFlightEmissionsRequest": { diff --git a/googleapiclient/discovery_cache/documents/vault.v1.json b/googleapiclient/discovery_cache/documents/vault.v1.json index 6fe67324bfd..03dc9dcf5dd 100644 --- a/googleapiclient/discovery_cache/documents/vault.v1.json +++ b/googleapiclient/discovery_cache/documents/vault.v1.json @@ -1203,7 +1203,7 @@ } } }, -"revision": "20240510", +"revision": "20240530", "rootUrl": "https://vault.googleapis.com/", "schemas": { "AccountCount": { diff --git a/googleapiclient/discovery_cache/documents/verifiedaccess.v1.json b/googleapiclient/discovery_cache/documents/verifiedaccess.v1.json index 000ea6e58a5..a429d39d6ab 100644 --- a/googleapiclient/discovery_cache/documents/verifiedaccess.v1.json +++ b/googleapiclient/discovery_cache/documents/verifiedaccess.v1.json @@ -146,7 +146,7 @@ } } }, -"revision": "20240514", +"revision": "20240531", "rootUrl": "https://verifiedaccess.googleapis.com/", "schemas": { "Challenge": { diff --git a/googleapiclient/discovery_cache/documents/verifiedaccess.v2.json b/googleapiclient/discovery_cache/documents/verifiedaccess.v2.json index a7808027a83..84edd0ce52c 100644 --- a/googleapiclient/discovery_cache/documents/verifiedaccess.v2.json +++ b/googleapiclient/discovery_cache/documents/verifiedaccess.v2.json @@ -146,7 +146,7 @@ } } }, -"revision": "20240514", +"revision": "20240531", "rootUrl": "https://verifiedaccess.googleapis.com/", "schemas": { "Challenge": { diff --git a/googleapiclient/discovery_cache/documents/versionhistory.v1.json b/googleapiclient/discovery_cache/documents/versionhistory.v1.json index ef684568fa4..42146e6b8b7 100644 --- 
a/googleapiclient/discovery_cache/documents/versionhistory.v1.json +++ b/googleapiclient/discovery_cache/documents/versionhistory.v1.json @@ -271,7 +271,7 @@ } } }, -"revision": "20240526", +"revision": "20240602", "rootUrl": "https://versionhistory.googleapis.com/", "schemas": { "Channel": { @@ -471,6 +471,10 @@ "description": "Release name. Format is \"{product}/platforms/{platform}/channels/{channel}/versions/{version}/releases/{release}\"", "type": "string" }, +"pinnable": { +"description": "Whether or not the release was available for version pinning.", +"type": "boolean" +}, "serving": { "$ref": "Interval", "description": "Timestamp interval of when the release was live. If end_time is unspecified, the release is currently live." diff --git a/googleapiclient/discovery_cache/documents/vision.v1.json b/googleapiclient/discovery_cache/documents/vision.v1.json index be1c749e630..68ad473ce30 100644 --- a/googleapiclient/discovery_cache/documents/vision.v1.json +++ b/googleapiclient/discovery_cache/documents/vision.v1.json @@ -1282,7 +1282,7 @@ } } }, -"revision": "20240517", +"revision": "20240524", "rootUrl": "https://vision.googleapis.com/", "schemas": { "AddProductToProductSetRequest": { diff --git a/googleapiclient/discovery_cache/documents/vision.v1p1beta1.json b/googleapiclient/discovery_cache/documents/vision.v1p1beta1.json index 034c0209049..2f96c8cdbef 100644 --- a/googleapiclient/discovery_cache/documents/vision.v1p1beta1.json +++ b/googleapiclient/discovery_cache/documents/vision.v1p1beta1.json @@ -449,7 +449,7 @@ } } }, -"revision": "20240517", +"revision": "20240524", "rootUrl": "https://vision.googleapis.com/", "schemas": { "AnnotateFileResponse": { diff --git a/googleapiclient/discovery_cache/documents/vision.v1p2beta1.json b/googleapiclient/discovery_cache/documents/vision.v1p2beta1.json index c6038c788af..d95f0fd9fc7 100644 --- a/googleapiclient/discovery_cache/documents/vision.v1p2beta1.json +++ b/googleapiclient/discovery_cache/documents/vision.v1p2beta1.json @@ -449,7 +449,7 @@ } } }, -"revision": "20240517", +"revision": "20240524", "rootUrl": "https://vision.googleapis.com/", "schemas": { "AnnotateFileResponse": { diff --git a/googleapiclient/discovery_cache/documents/vmmigration.v1.json b/googleapiclient/discovery_cache/documents/vmmigration.v1.json index d62daa29ab9..7d135c73ea6 100644 --- a/googleapiclient/discovery_cache/documents/vmmigration.v1.json +++ b/googleapiclient/discovery_cache/documents/vmmigration.v1.json @@ -2220,7 +2220,7 @@ } } }, -"revision": "20240516", +"revision": "20240523", "rootUrl": "https://vmmigration.googleapis.com/", "schemas": { "AccessKeyCredentials": { diff --git a/googleapiclient/discovery_cache/documents/vmwareengine.v1.json b/googleapiclient/discovery_cache/documents/vmwareengine.v1.json index 1174cf318a5..1b19f2c8eeb 100644 --- a/googleapiclient/discovery_cache/documents/vmwareengine.v1.json +++ b/googleapiclient/discovery_cache/documents/vmwareengine.v1.json @@ -3173,7 +3173,7 @@ } } }, -"revision": "20240424", +"revision": "20240509", "rootUrl": "https://vmwareengine.googleapis.com/", "schemas": { "AuditConfig": { diff --git a/googleapiclient/discovery_cache/documents/walletobjects.v1.json b/googleapiclient/discovery_cache/documents/walletobjects.v1.json index 33b30f04510..19ed139135a 100644 --- a/googleapiclient/discovery_cache/documents/walletobjects.v1.json +++ b/googleapiclient/discovery_cache/documents/walletobjects.v1.json @@ -2681,7 +2681,7 @@ } } }, -"revision": "20240523", +"revision": "20240603", "rootUrl": 
"https://walletobjects.googleapis.com/", "schemas": { "ActivationOptions": { diff --git a/googleapiclient/discovery_cache/documents/webfonts.v1.json b/googleapiclient/discovery_cache/documents/webfonts.v1.json index ea22e30065a..51b20d45dfd 100644 --- a/googleapiclient/discovery_cache/documents/webfonts.v1.json +++ b/googleapiclient/discovery_cache/documents/webfonts.v1.json @@ -161,7 +161,7 @@ } } }, -"revision": "20240522", +"revision": "20240528", "rootUrl": "https://webfonts.googleapis.com/", "schemas": { "Axis": { diff --git a/googleapiclient/discovery_cache/documents/webrisk.v1.json b/googleapiclient/discovery_cache/documents/webrisk.v1.json index f11892e7c49..8bb0d53c1ff 100644 --- a/googleapiclient/discovery_cache/documents/webrisk.v1.json +++ b/googleapiclient/discovery_cache/documents/webrisk.v1.json @@ -420,7 +420,7 @@ } } }, -"revision": "20240519", +"revision": "20240603", "rootUrl": "https://webrisk.googleapis.com/", "schemas": { "GoogleCloudWebriskV1ComputeThreatListDiffResponse": { diff --git a/googleapiclient/discovery_cache/documents/websecurityscanner.v1.json b/googleapiclient/discovery_cache/documents/websecurityscanner.v1.json index e15449a1363..fc9b852d55a 100644 --- a/googleapiclient/discovery_cache/documents/websecurityscanner.v1.json +++ b/googleapiclient/discovery_cache/documents/websecurityscanner.v1.json @@ -526,7 +526,7 @@ } } }, -"revision": "20240516", +"revision": "20240529", "rootUrl": "https://websecurityscanner.googleapis.com/", "schemas": { "Authentication": { diff --git a/googleapiclient/discovery_cache/documents/websecurityscanner.v1alpha.json b/googleapiclient/discovery_cache/documents/websecurityscanner.v1alpha.json index 86025e8dccf..f2a9c463e96 100644 --- a/googleapiclient/discovery_cache/documents/websecurityscanner.v1alpha.json +++ b/googleapiclient/discovery_cache/documents/websecurityscanner.v1alpha.json @@ -526,7 +526,7 @@ } } }, -"revision": "20240516", +"revision": "20240529", "rootUrl": "https://websecurityscanner.googleapis.com/", "schemas": { "Authentication": { diff --git a/googleapiclient/discovery_cache/documents/websecurityscanner.v1beta.json b/googleapiclient/discovery_cache/documents/websecurityscanner.v1beta.json index 1b915160ff0..5e32aa2fc12 100644 --- a/googleapiclient/discovery_cache/documents/websecurityscanner.v1beta.json +++ b/googleapiclient/discovery_cache/documents/websecurityscanner.v1beta.json @@ -526,7 +526,7 @@ } } }, -"revision": "20240516", +"revision": "20240529", "rootUrl": "https://websecurityscanner.googleapis.com/", "schemas": { "Authentication": { diff --git a/googleapiclient/discovery_cache/documents/workflowexecutions.v1.json b/googleapiclient/discovery_cache/documents/workflowexecutions.v1.json index f059e7a90bd..70c8f21664b 100644 --- a/googleapiclient/discovery_cache/documents/workflowexecutions.v1.json +++ b/googleapiclient/discovery_cache/documents/workflowexecutions.v1.json @@ -457,7 +457,7 @@ } } }, -"revision": "20240507", +"revision": "20240528", "rootUrl": "https://workflowexecutions.googleapis.com/", "schemas": { "Callback": { @@ -1006,6 +1006,11 @@ "description": "StepEntryMetadata contains metadata information about this step.", "id": "StepEntryMetadata", "properties": { +"expectedIteration": { +"description": "Expected iteration represents the expected number of iterations in the step's progress.", +"format": "int64", +"type": "string" +}, "progressNumber": { "description": "Progress number represents the current state of the current progress. 
For example, a step entry represents the 4th iteration in a progress of PROGRESS_TYPE_FOR.", "format": "int64", diff --git a/googleapiclient/discovery_cache/documents/workflowexecutions.v1beta.json b/googleapiclient/discovery_cache/documents/workflowexecutions.v1beta.json index 3ae8ae9366f..988f3c7e6e5 100644 --- a/googleapiclient/discovery_cache/documents/workflowexecutions.v1beta.json +++ b/googleapiclient/discovery_cache/documents/workflowexecutions.v1beta.json @@ -269,7 +269,7 @@ } } }, -"revision": "20240507", +"revision": "20240528", "rootUrl": "https://workflowexecutions.googleapis.com/", "schemas": { "CancelExecutionRequest": { diff --git a/googleapiclient/discovery_cache/documents/workflows.v1.json b/googleapiclient/discovery_cache/documents/workflows.v1.json index a316eefc179..3cb57bc4a9b 100644 --- a/googleapiclient/discovery_cache/documents/workflows.v1.json +++ b/googleapiclient/discovery_cache/documents/workflows.v1.json @@ -485,7 +485,7 @@ } } }, -"revision": "20240508", +"revision": "20240522", "rootUrl": "https://workflows.googleapis.com/", "schemas": { "Empty": { diff --git a/googleapiclient/discovery_cache/documents/workflows.v1beta.json b/googleapiclient/discovery_cache/documents/workflows.v1beta.json index 993b5acf570..02765016a36 100644 --- a/googleapiclient/discovery_cache/documents/workflows.v1beta.json +++ b/googleapiclient/discovery_cache/documents/workflows.v1beta.json @@ -444,7 +444,7 @@ } } }, -"revision": "20240508", +"revision": "20240522", "rootUrl": "https://workflows.googleapis.com/", "schemas": { "Empty": { diff --git a/googleapiclient/discovery_cache/documents/workspaceevents.v1.json b/googleapiclient/discovery_cache/documents/workspaceevents.v1.json index 4e1b9e53f12..4bf408a582e 100644 --- a/googleapiclient/discovery_cache/documents/workspaceevents.v1.json +++ b/googleapiclient/discovery_cache/documents/workspaceevents.v1.json @@ -424,7 +424,7 @@ } } }, -"revision": "20240521", +"revision": "20240528", "rootUrl": "https://workspaceevents.googleapis.com/", "schemas": { "ListSubscriptionsResponse": { diff --git a/googleapiclient/discovery_cache/documents/youtube.v3.json b/googleapiclient/discovery_cache/documents/youtube.v3.json index 992dc54c51e..466fdee87f0 100644 --- a/googleapiclient/discovery_cache/documents/youtube.v3.json +++ b/googleapiclient/discovery_cache/documents/youtube.v3.json @@ -4072,7 +4072,7 @@ } } }, -"revision": "20240521", +"revision": "20240602", "rootUrl": "https://youtube.googleapis.com/", "schemas": { "AbuseReport": { diff --git a/googleapiclient/discovery_cache/documents/youtubeAnalytics.v2.json b/googleapiclient/discovery_cache/documents/youtubeAnalytics.v2.json index 33c586447b2..3d94335f64f 100644 --- a/googleapiclient/discovery_cache/documents/youtubeAnalytics.v2.json +++ b/googleapiclient/discovery_cache/documents/youtubeAnalytics.v2.json @@ -421,7 +421,7 @@ } } }, -"revision": "20240522", +"revision": "20240602", "rootUrl": "https://youtubeanalytics.googleapis.com/", "schemas": { "EmptyResponse": { diff --git a/googleapiclient/discovery_cache/documents/youtubereporting.v1.json b/googleapiclient/discovery_cache/documents/youtubereporting.v1.json index 105242acb8d..2e300ac8b51 100644 --- a/googleapiclient/discovery_cache/documents/youtubereporting.v1.json +++ b/googleapiclient/discovery_cache/documents/youtubereporting.v1.json @@ -411,7 +411,7 @@ } } }, -"revision": "20240522", +"revision": "20240602", "rootUrl": "https://youtubereporting.googleapis.com/", "schemas": { "Empty": { From a999ad0d152d6404d379c8332bc27abfe85ba7d7 Mon Sep
17 00:00:00 2001 From: "release-please[bot]" <55107282+release-please[bot]@users.noreply.github.com> Date: Tue, 4 Jun 2024 10:58:58 -0400 Subject: [PATCH 2/2] chore(main): release 2.132.0 (#2412) Co-authored-by: release-please[bot] <55107282+release-please[bot]@users.noreply.github.com> --- CHANGELOG.md | 43 ++++++++++++++++++++++++++++++++++++++ googleapiclient/version.py | 2 +- 2 files changed, 44 insertions(+), 1 deletion(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index 74fff96275a..82642fddf52 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,5 +1,48 @@ # Changelog +## [2.132.0](https://github.com/googleapis/google-api-python-client/compare/v2.131.0...v2.132.0) (2024-06-04) + + +### Features + +* **aiplatform:** Update the api https://togithub.com/googleapis/google-api-python-client/commit/9d6000fa065ac1ef877de37b94a5e923c89b8228 ([9401d1f](https://github.com/googleapis/google-api-python-client/commit/9401d1f1f16ed979d217e540aa044f430699aa4d)) +* **analyticsadmin:** Update the api https://togithub.com/googleapis/google-api-python-client/commit/494a29d2266725566e185c41e19c08419c88f9b4 ([9401d1f](https://github.com/googleapis/google-api-python-client/commit/9401d1f1f16ed979d217e540aa044f430699aa4d)) +* **androidmanagement:** Update the api https://togithub.com/googleapis/google-api-python-client/commit/5afc4010f2f7d303ba0b3a812aab7496aea97adb ([9401d1f](https://github.com/googleapis/google-api-python-client/commit/9401d1f1f16ed979d217e540aa044f430699aa4d)) +* **backupdr:** Update the api https://togithub.com/googleapis/google-api-python-client/commit/5bcc5d39d04aa4691e36cc57b256d983ec52159b ([9401d1f](https://github.com/googleapis/google-api-python-client/commit/9401d1f1f16ed979d217e540aa044f430699aa4d)) +* **chromemanagement:** Update the api https://togithub.com/googleapis/google-api-python-client/commit/32ddf526ff40d30f20f9116027a4f208f38cc792 ([9401d1f](https://github.com/googleapis/google-api-python-client/commit/9401d1f1f16ed979d217e540aa044f430699aa4d)) +* **cloudbilling:** Update the api https://togithub.com/googleapis/google-api-python-client/commit/2b5c66b2c5d2ffaa649dd9455da765e10dbce113 ([9401d1f](https://github.com/googleapis/google-api-python-client/commit/9401d1f1f16ed979d217e540aa044f430699aa4d)) +* **cloudfunctions:** Update the api https://togithub.com/googleapis/google-api-python-client/commit/34314fb79a2ef113f2f1db15738f2d2e29887222 ([9401d1f](https://github.com/googleapis/google-api-python-client/commit/9401d1f1f16ed979d217e540aa044f430699aa4d)) +* **cloudsearch:** Update the api https://togithub.com/googleapis/google-api-python-client/commit/d32e900aeae99a2d7cab64037a2a0d8285aba8b6 ([9401d1f](https://github.com/googleapis/google-api-python-client/commit/9401d1f1f16ed979d217e540aa044f430699aa4d)) +* **compute:** Update the api https://togithub.com/googleapis/google-api-python-client/commit/4f7da21c3c67d1019b996492e5dfc9dcacb38214 ([9401d1f](https://github.com/googleapis/google-api-python-client/commit/9401d1f1f16ed979d217e540aa044f430699aa4d)) +* **connectors:** Update the api https://togithub.com/googleapis/google-api-python-client/commit/8087f14f8942261881ea87bf47fba512a78a9fc1 ([9401d1f](https://github.com/googleapis/google-api-python-client/commit/9401d1f1f16ed979d217e540aa044f430699aa4d)) +* **contactcenteraiplatform:** Update the api https://togithub.com/googleapis/google-api-python-client/commit/4fb577d2d6e2851c8d923066c9ff7b5c1e9df79e ([9401d1f](https://github.com/googleapis/google-api-python-client/commit/9401d1f1f16ed979d217e540aa044f430699aa4d)) +* 
**contactcenterinsights:** Update the api https://togithub.com/googleapis/google-api-python-client/commit/bb49784a9cb793ff64c8e1d4ee3b98a173b4e31d ([9401d1f](https://github.com/googleapis/google-api-python-client/commit/9401d1f1f16ed979d217e540aa044f430699aa4d)) +* **datamigration:** Update the api https://togithub.com/googleapis/google-api-python-client/commit/ac474a90aeb6d2443b12c1bf891c7fb81dbcb9ed ([9401d1f](https://github.com/googleapis/google-api-python-client/commit/9401d1f1f16ed979d217e540aa044f430699aa4d)) +* **dataplex:** Update the api https://togithub.com/googleapis/google-api-python-client/commit/d959b3d78c7034bbc3571d9ede7d6de3587989f7 ([9401d1f](https://github.com/googleapis/google-api-python-client/commit/9401d1f1f16ed979d217e540aa044f430699aa4d)) +* **datastream:** Update the api https://togithub.com/googleapis/google-api-python-client/commit/3abd0f41f2e617749aba78913cb4fa6391df55a8 ([9401d1f](https://github.com/googleapis/google-api-python-client/commit/9401d1f1f16ed979d217e540aa044f430699aa4d)) +* **dialogflow:** Update the api https://togithub.com/googleapis/google-api-python-client/commit/2d79840e8bfc7aa3bee79b9554627dfd1cb13121 ([9401d1f](https://github.com/googleapis/google-api-python-client/commit/9401d1f1f16ed979d217e540aa044f430699aa4d)) +* **discoveryengine:** Update the api https://togithub.com/googleapis/google-api-python-client/commit/4522cd5e31c6437d52d8d8a09a54cf2c38fb7dcf ([9401d1f](https://github.com/googleapis/google-api-python-client/commit/9401d1f1f16ed979d217e540aa044f430699aa4d)) +* **documentai:** Update the api https://togithub.com/googleapis/google-api-python-client/commit/a06827efcc41fe6af56f687f7c1dc4f8538a166b ([9401d1f](https://github.com/googleapis/google-api-python-client/commit/9401d1f1f16ed979d217e540aa044f430699aa4d)) +* **fcmdata:** Update the api https://togithub.com/googleapis/google-api-python-client/commit/f7c50fd9f7b75df93ef9775684cba47b66cb0c81 ([9401d1f](https://github.com/googleapis/google-api-python-client/commit/9401d1f1f16ed979d217e540aa044f430699aa4d)) +* **firebaseappcheck:** Update the api https://togithub.com/googleapis/google-api-python-client/commit/0744228b03e4c38e64358d9b38c17b2df3e2871e ([9401d1f](https://github.com/googleapis/google-api-python-client/commit/9401d1f1f16ed979d217e540aa044f430699aa4d)) +* **healthcare:** Update the api https://togithub.com/googleapis/google-api-python-client/commit/05c4657fa6322067b421e9e0d887904faba04811 ([9401d1f](https://github.com/googleapis/google-api-python-client/commit/9401d1f1f16ed979d217e540aa044f430699aa4d)) +* **iam:** Update the api https://togithub.com/googleapis/google-api-python-client/commit/331029f3a230aa25f32a75b9e81adf9d6ed97ed5 ([9401d1f](https://github.com/googleapis/google-api-python-client/commit/9401d1f1f16ed979d217e540aa044f430699aa4d)) +* **integrations:** Update the api https://togithub.com/googleapis/google-api-python-client/commit/8bd4954709fc4bea245abd2efca870e8fdbc2c40 ([9401d1f](https://github.com/googleapis/google-api-python-client/commit/9401d1f1f16ed979d217e540aa044f430699aa4d)) +* **migrationcenter:** Update the api https://togithub.com/googleapis/google-api-python-client/commit/b46b8b7081691a40f80241bfa154acc6d46abc9d ([9401d1f](https://github.com/googleapis/google-api-python-client/commit/9401d1f1f16ed979d217e540aa044f430699aa4d)) +* **networkconnectivity:** Update the api https://togithub.com/googleapis/google-api-python-client/commit/ff49e0b244002d44580f689e0a3f77175bbe5dfb 
([9401d1f](https://github.com/googleapis/google-api-python-client/commit/9401d1f1f16ed979d217e540aa044f430699aa4d)) +* **policyanalyzer:** Update the api https://togithub.com/googleapis/google-api-python-client/commit/b56b2b1453126a06a9bcba1c96766a905006d3a7 ([9401d1f](https://github.com/googleapis/google-api-python-client/commit/9401d1f1f16ed979d217e540aa044f430699aa4d)) +* **resourcesettings:** Update the api https://togithub.com/googleapis/google-api-python-client/commit/a5e25b381450da4c88bf86d24550fa7a75f4636a ([9401d1f](https://github.com/googleapis/google-api-python-client/commit/9401d1f1f16ed979d217e540aa044f430699aa4d)) +* **run:** Update the api https://togithub.com/googleapis/google-api-python-client/commit/81892c895bfe7d8b5a60a1ce7c62f6bbd603a7b0 ([9401d1f](https://github.com/googleapis/google-api-python-client/commit/9401d1f1f16ed979d217e540aa044f430699aa4d)) +* **servicecontrol:** Update the api https://togithub.com/googleapis/google-api-python-client/commit/0cfcab3609ec38a84d245cc3207cedc6ec92db5a ([9401d1f](https://github.com/googleapis/google-api-python-client/commit/9401d1f1f16ed979d217e540aa044f430699aa4d)) +* **spanner:** Update the api https://togithub.com/googleapis/google-api-python-client/commit/195cae366ac9c01537584735879ef5ae658efee2 ([9401d1f](https://github.com/googleapis/google-api-python-client/commit/9401d1f1f16ed979d217e540aa044f430699aa4d)) +* **versionhistory:** Update the api https://togithub.com/googleapis/google-api-python-client/commit/9cef71c5a52655e5e37b51ac0a430801c2cd97bd ([9401d1f](https://github.com/googleapis/google-api-python-client/commit/9401d1f1f16ed979d217e540aa044f430699aa4d)) +* **workflowexecutions:** Update the api https://togithub.com/googleapis/google-api-python-client/commit/6670b1ea9d65e7574d77954cfd1722736bfa5d1c ([9401d1f](https://github.com/googleapis/google-api-python-client/commit/9401d1f1f16ed979d217e540aa044f430699aa4d)) + + +### Bug Fixes + +* **secretmanager:** Update the api https://togithub.com/googleapis/google-api-python-client/commit/d0199eaf1f51289ad13683a54b6b26a5019b560d ([9401d1f](https://github.com/googleapis/google-api-python-client/commit/9401d1f1f16ed979d217e540aa044f430699aa4d)) + ## [2.131.0](https://github.com/googleapis/google-api-python-client/compare/v2.130.0...v2.131.0) (2024-05-28) diff --git a/googleapiclient/version.py b/googleapiclient/version.py index e7c8e23cbad..a9ac4c31ed9 100644 --- a/googleapiclient/version.py +++ b/googleapiclient/version.py @@ -12,4 +12,4 @@ # See the License for the specific language governing permissions and # limitations under the License. -__version__ = "2.131.0" +__version__ = "2.132.0"